00:00:00.002 Started by upstream project "autotest-nightly" build number 3886 00:00:00.002 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3266 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.040 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.041 The recommended git tool is: git 00:00:00.041 using credential 00000000-0000-0000-0000-000000000002 00:00:00.043 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.059 Fetching changes from the remote Git repository 00:00:00.061 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.084 Using shallow fetch with depth 1 00:00:00.084 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.084 > git --version # timeout=10 00:00:00.117 > git --version # 'git version 2.39.2' 00:00:00.117 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.171 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.171 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.271 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.283 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.296 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:03.296 > git config core.sparsecheckout # timeout=10 00:00:03.305 > git read-tree -mu HEAD # timeout=10 00:00:03.322 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # 
timeout=5 00:00:03.343 Commit message: "inventory: add WCP3 to free inventory" 00:00:03.343 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:03.447 [Pipeline] Start of Pipeline 00:00:03.460 [Pipeline] library 00:00:03.461 Loading library shm_lib@master 00:00:03.461 Library shm_lib@master is cached. Copying from home. 00:00:03.478 [Pipeline] node 00:00:03.492 Running on WFP19 in /var/jenkins/workspace/crypto-phy-autotest 00:00:03.494 [Pipeline] { 00:00:03.503 [Pipeline] catchError 00:00:03.504 [Pipeline] { 00:00:03.517 [Pipeline] wrap 00:00:03.524 [Pipeline] { 00:00:03.531 [Pipeline] stage 00:00:03.533 [Pipeline] { (Prologue) 00:00:03.711 [Pipeline] sh 00:00:03.995 + logger -p user.info -t JENKINS-CI 00:00:04.014 [Pipeline] echo 00:00:04.016 Node: WFP19 00:00:04.023 [Pipeline] sh 00:00:04.320 [Pipeline] setCustomBuildProperty 00:00:04.332 [Pipeline] echo 00:00:04.334 Cleanup processes 00:00:04.337 [Pipeline] sh 00:00:04.616 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:04.616 1143325 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:04.630 [Pipeline] sh 00:00:04.918 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:04.918 ++ grep -v 'sudo pgrep' 00:00:04.918 ++ awk '{print $1}' 00:00:04.918 + sudo kill -9 00:00:04.918 + true 00:00:04.932 [Pipeline] cleanWs 00:00:04.941 [WS-CLEANUP] Deleting project workspace... 00:00:04.941 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.949 [WS-CLEANUP] done 00:00:04.953 [Pipeline] setCustomBuildProperty 00:00:04.962 [Pipeline] sh 00:00:05.242 + sudo git config --global --replace-all safe.directory '*' 00:00:05.317 [Pipeline] httpRequest 00:00:05.333 [Pipeline] echo 00:00:05.335 Sorcerer 10.211.164.101 is alive 00:00:05.341 [Pipeline] httpRequest 00:00:05.344 HttpMethod: GET 00:00:05.345 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.345 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.347 Response Code: HTTP/1.1 200 OK 00:00:05.347 Success: Status code 200 is in the accepted range: 200,404 00:00:05.348 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.119 [Pipeline] sh 00:00:06.399 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.414 [Pipeline] httpRequest 00:00:06.452 [Pipeline] echo 00:00:06.453 Sorcerer 10.211.164.101 is alive 00:00:06.460 [Pipeline] httpRequest 00:00:06.464 HttpMethod: GET 00:00:06.465 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:06.465 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:06.467 Response Code: HTTP/1.1 200 OK 00:00:06.467 Success: Status code 200 is in the accepted range: 200,404 00:00:06.468 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:28.015 [Pipeline] sh 00:00:28.298 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:30.849 [Pipeline] sh 00:00:31.131 + git -C spdk log --oneline -n5 00:00:31.131 719d03c6a sock/uring: only register net impl if supported 00:00:31.131 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:00:31.131 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to 
OBJECT_NVMF_RDMA_IO 00:00:31.131 6c7c1f57e accel: add sequence outstanding stat 00:00:31.131 3bc8e6a26 accel: add utility to put task 00:00:31.145 [Pipeline] } 00:00:31.163 [Pipeline] // stage 00:00:31.173 [Pipeline] stage 00:00:31.175 [Pipeline] { (Prepare) 00:00:31.195 [Pipeline] writeFile 00:00:31.213 [Pipeline] sh 00:00:31.497 + logger -p user.info -t JENKINS-CI 00:00:31.511 [Pipeline] sh 00:00:31.794 + logger -p user.info -t JENKINS-CI 00:00:31.806 [Pipeline] sh 00:00:32.094 + cat autorun-spdk.conf 00:00:32.094 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:32.094 SPDK_TEST_BLOCKDEV=1 00:00:32.094 SPDK_TEST_ISAL=1 00:00:32.094 SPDK_TEST_CRYPTO=1 00:00:32.094 SPDK_TEST_REDUCE=1 00:00:32.094 SPDK_TEST_VBDEV_COMPRESS=1 00:00:32.094 SPDK_RUN_ASAN=1 00:00:32.094 SPDK_RUN_UBSAN=1 00:00:32.102 RUN_NIGHTLY=1 00:00:32.108 [Pipeline] readFile 00:00:32.137 [Pipeline] withEnv 00:00:32.139 [Pipeline] { 00:00:32.154 [Pipeline] sh 00:00:32.440 + set -ex 00:00:32.440 + [[ -f /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf ]] 00:00:32.440 + source /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf 00:00:32.440 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:32.440 ++ SPDK_TEST_BLOCKDEV=1 00:00:32.440 ++ SPDK_TEST_ISAL=1 00:00:32.440 ++ SPDK_TEST_CRYPTO=1 00:00:32.440 ++ SPDK_TEST_REDUCE=1 00:00:32.440 ++ SPDK_TEST_VBDEV_COMPRESS=1 00:00:32.440 ++ SPDK_RUN_ASAN=1 00:00:32.440 ++ SPDK_RUN_UBSAN=1 00:00:32.440 ++ RUN_NIGHTLY=1 00:00:32.440 + case $SPDK_TEST_NVMF_NICS in 00:00:32.440 + DRIVERS= 00:00:32.440 + [[ -n '' ]] 00:00:32.440 + exit 0 00:00:32.450 [Pipeline] } 00:00:32.469 [Pipeline] // withEnv 00:00:32.474 [Pipeline] } 00:00:32.492 [Pipeline] // stage 00:00:32.503 [Pipeline] catchError 00:00:32.505 [Pipeline] { 00:00:32.522 [Pipeline] timeout 00:00:32.522 Timeout set to expire in 40 min 00:00:32.524 [Pipeline] { 00:00:32.541 [Pipeline] stage 00:00:32.543 [Pipeline] { (Tests) 00:00:32.561 [Pipeline] sh 00:00:32.845 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh 
/var/jenkins/workspace/crypto-phy-autotest 00:00:32.845 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest 00:00:32.845 + DIR_ROOT=/var/jenkins/workspace/crypto-phy-autotest 00:00:32.845 + [[ -n /var/jenkins/workspace/crypto-phy-autotest ]] 00:00:32.845 + DIR_SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:32.845 + DIR_OUTPUT=/var/jenkins/workspace/crypto-phy-autotest/output 00:00:32.845 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/spdk ]] 00:00:32.845 + [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/output ]] 00:00:32.845 + mkdir -p /var/jenkins/workspace/crypto-phy-autotest/output 00:00:32.845 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/output ]] 00:00:32.845 + [[ crypto-phy-autotest == pkgdep-* ]] 00:00:32.845 + cd /var/jenkins/workspace/crypto-phy-autotest 00:00:32.845 + source /etc/os-release 00:00:32.845 ++ NAME='Fedora Linux' 00:00:32.845 ++ VERSION='38 (Cloud Edition)' 00:00:32.845 ++ ID=fedora 00:00:32.845 ++ VERSION_ID=38 00:00:32.845 ++ VERSION_CODENAME= 00:00:32.845 ++ PLATFORM_ID=platform:f38 00:00:32.845 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:32.845 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:32.845 ++ LOGO=fedora-logo-icon 00:00:32.845 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:32.845 ++ HOME_URL=https://fedoraproject.org/ 00:00:32.845 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:32.845 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:32.845 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:32.845 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:32.845 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:32.845 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:32.845 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:32.845 ++ SUPPORT_END=2024-05-14 00:00:32.845 ++ VARIANT='Cloud Edition' 00:00:32.845 ++ VARIANT_ID=cloud 00:00:32.845 + uname -a 00:00:32.845 Linux spdk-wfp-19 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 
00:00:32.845 + sudo /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:00:37.076 Hugepages 00:00:37.076 node hugesize free / total 00:00:37.076 node0 1048576kB 0 / 0 00:00:37.076 node0 2048kB 0 / 0 00:00:37.076 node1 1048576kB 0 / 0 00:00:37.076 node1 2048kB 0 / 0 00:00:37.076 00:00:37.076 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:37.076 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:37.076 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:37.076 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:37.076 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:37.076 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:37.076 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:37.076 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:37.076 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:37.076 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:37.076 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:37.076 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:37.076 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:37.076 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:37.076 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:37.076 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:37.076 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:37.076 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:37.076 + rm -f /tmp/spdk-ld-path 00:00:37.076 + source autorun-spdk.conf 00:00:37.076 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:37.076 ++ SPDK_TEST_BLOCKDEV=1 00:00:37.076 ++ SPDK_TEST_ISAL=1 00:00:37.076 ++ SPDK_TEST_CRYPTO=1 00:00:37.076 ++ SPDK_TEST_REDUCE=1 00:00:37.076 ++ SPDK_TEST_VBDEV_COMPRESS=1 00:00:37.076 ++ SPDK_RUN_ASAN=1 00:00:37.076 ++ SPDK_RUN_UBSAN=1 00:00:37.076 ++ RUN_NIGHTLY=1 00:00:37.076 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:37.076 + [[ -n '' ]] 00:00:37.076 + sudo git config --global --add safe.directory /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:37.076 + for M in /var/spdk/build-*-manifest.txt 
00:00:37.076 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:37.076 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/ 00:00:37.076 + for M in /var/spdk/build-*-manifest.txt 00:00:37.076 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:37.076 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/ 00:00:37.076 ++ uname 00:00:37.076 + [[ Linux == \L\i\n\u\x ]] 00:00:37.076 + sudo dmesg -T 00:00:37.076 + sudo dmesg --clear 00:00:37.076 + dmesg_pid=1144387 00:00:37.076 + [[ Fedora Linux == FreeBSD ]] 00:00:37.076 + sudo dmesg -Tw 00:00:37.076 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:37.076 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:37.076 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:37.076 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:37.076 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:37.076 + [[ -x /usr/src/fio-static/fio ]] 00:00:37.076 + export FIO_BIN=/usr/src/fio-static/fio 00:00:37.076 + FIO_BIN=/usr/src/fio-static/fio 00:00:37.077 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\c\r\y\p\t\o\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:37.077 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:37.077 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:37.077 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:37.077 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:37.077 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:37.077 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:37.077 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:37.077 + spdk/autorun.sh /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf 00:00:37.077 Test configuration: 00:00:37.077 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:37.077 SPDK_TEST_BLOCKDEV=1 00:00:37.077 SPDK_TEST_ISAL=1 00:00:37.077 SPDK_TEST_CRYPTO=1 00:00:37.077 SPDK_TEST_REDUCE=1 00:00:37.077 SPDK_TEST_VBDEV_COMPRESS=1 00:00:37.077 SPDK_RUN_ASAN=1 00:00:37.077 SPDK_RUN_UBSAN=1 00:00:37.077 RUN_NIGHTLY=1 21:43:56 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:00:37.077 21:43:56 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:37.077 21:43:56 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:37.077 21:43:56 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:37.077 21:43:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:37.077 21:43:56 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:37.077 21:43:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:37.077 21:43:56 -- paths/export.sh@5 -- $ export PATH 00:00:37.077 21:43:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:37.077 21:43:56 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:00:37.077 21:43:56 -- common/autobuild_common.sh@444 -- $ date +%s 00:00:37.077 21:43:56 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720899836.XXXXXX 00:00:37.077 21:43:56 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720899836.0rROdj 00:00:37.077 21:43:56 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:00:37.077 21:43:56 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:00:37.077 21:43:56 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/' 00:00:37.077 
21:43:56 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:37.077 21:43:56 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:37.077 21:43:56 -- common/autobuild_common.sh@460 -- $ get_config_params 00:00:37.077 21:43:56 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:37.077 21:43:56 -- common/autotest_common.sh@10 -- $ set +x 00:00:37.077 21:43:56 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:00:37.077 21:43:56 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:00:37.077 21:43:56 -- pm/common@17 -- $ local monitor 00:00:37.077 21:43:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:37.077 21:43:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:37.077 21:43:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:37.077 21:43:56 -- pm/common@21 -- $ date +%s 00:00:37.077 21:43:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:37.077 21:43:56 -- pm/common@21 -- $ date +%s 00:00:37.077 21:43:56 -- pm/common@21 -- $ date +%s 00:00:37.077 21:43:56 -- pm/common@25 -- $ sleep 1 00:00:37.077 21:43:56 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720899836 00:00:37.077 21:43:56 -- pm/common@21 -- $ date +%s 00:00:37.077 21:43:56 -- 
pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720899836 00:00:37.077 21:43:56 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720899836 00:00:37.077 21:43:56 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720899836 00:00:37.077 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720899836_collect-cpu-temp.pm.log 00:00:37.077 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720899836_collect-vmstat.pm.log 00:00:37.077 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720899836_collect-cpu-load.pm.log 00:00:37.077 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720899836_collect-bmc-pm.bmc.pm.log 00:00:38.016 21:43:57 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:00:38.016 21:43:57 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:38.016 21:43:57 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:38.016 21:43:57 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:38.016 21:43:57 -- spdk/autobuild.sh@16 -- $ date -u 00:00:38.016 Sat Jul 13 07:43:57 PM UTC 2024 00:00:38.016 21:43:57 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:38.016 v24.09-pre-202-g719d03c6a 00:00:38.016 21:43:57 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:00:38.016 21:43:57 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:00:38.016 21:43:57 -- 
common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:38.016 21:43:57 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:38.016 21:43:57 -- common/autotest_common.sh@10 -- $ set +x 00:00:38.016 ************************************ 00:00:38.016 START TEST asan 00:00:38.016 ************************************ 00:00:38.016 21:43:57 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:00:38.016 using asan 00:00:38.016 00:00:38.016 real 0m0.001s 00:00:38.016 user 0m0.000s 00:00:38.016 sys 0m0.000s 00:00:38.016 21:43:57 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:38.016 21:43:57 asan -- common/autotest_common.sh@10 -- $ set +x 00:00:38.016 ************************************ 00:00:38.016 END TEST asan 00:00:38.016 ************************************ 00:00:38.016 21:43:57 -- common/autotest_common.sh@1142 -- $ return 0 00:00:38.016 21:43:57 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:38.016 21:43:57 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:38.016 21:43:57 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:38.016 21:43:57 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:38.016 21:43:57 -- common/autotest_common.sh@10 -- $ set +x 00:00:38.016 ************************************ 00:00:38.016 START TEST ubsan 00:00:38.016 ************************************ 00:00:38.016 21:43:57 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:00:38.016 using ubsan 00:00:38.016 00:00:38.016 real 0m0.000s 00:00:38.016 user 0m0.000s 00:00:38.016 sys 0m0.000s 00:00:38.016 21:43:57 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:38.016 21:43:57 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:38.016 ************************************ 00:00:38.016 END TEST ubsan 00:00:38.016 ************************************ 00:00:38.281 21:43:57 -- common/autotest_common.sh@1142 -- $ return 0 00:00:38.281 21:43:57 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 
00:00:38.281 21:43:57 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:38.281 21:43:57 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:38.281 21:43:57 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:38.281 21:43:57 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:38.281 21:43:57 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:38.281 21:43:57 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:38.281 21:43:57 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:38.281 21:43:57 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:00:38.281 Using default SPDK env in /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:00:38.281 Using default DPDK in /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:00:38.547 Using 'verbs' RDMA provider 00:00:54.807 Configuring ISA-L (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:07.020 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:07.020 Creating mk/config.mk...done. 00:01:07.020 Creating mk/cc.flags.mk...done. 00:01:07.020 Type 'make' to build. 00:01:07.020 21:44:25 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:07.020 21:44:25 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:07.020 21:44:25 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:07.020 21:44:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:07.020 ************************************ 00:01:07.020 START TEST make 00:01:07.020 ************************************ 00:01:07.020 21:44:25 make -- common/autotest_common.sh@1123 -- $ make -j112 00:01:07.020 make[1]: Nothing to be done for 'all'. 
00:01:33.566 The Meson build system 00:01:33.566 Version: 1.3.1 00:01:33.566 Source dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk 00:01:33.566 Build dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp 00:01:33.566 Build type: native build 00:01:33.566 Program cat found: YES (/usr/bin/cat) 00:01:33.566 Project name: DPDK 00:01:33.566 Project version: 24.03.0 00:01:33.566 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:33.566 C linker for the host machine: cc ld.bfd 2.39-16 00:01:33.566 Host machine cpu family: x86_64 00:01:33.566 Host machine cpu: x86_64 00:01:33.566 Message: ## Building in Developer Mode ## 00:01:33.566 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:33.566 Program check-symbols.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:33.566 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:33.566 Program python3 found: YES (/usr/bin/python3) 00:01:33.566 Program cat found: YES (/usr/bin/cat) 00:01:33.566 Compiler for C supports arguments -march=native: YES 00:01:33.566 Checking for size of "void *" : 8 00:01:33.566 Checking for size of "void *" : 8 (cached) 00:01:33.566 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:33.566 Library m found: YES 00:01:33.566 Library numa found: YES 00:01:33.566 Has header "numaif.h" : YES 00:01:33.566 Library fdt found: NO 00:01:33.566 Library execinfo found: NO 00:01:33.566 Has header "execinfo.h" : YES 00:01:33.566 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:33.566 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:33.566 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:33.566 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:33.566 Run-time dependency openssl found: YES 3.0.9 00:01:33.566 Run-time dependency 
libpcap found: YES 1.10.4 00:01:33.566 Has header "pcap.h" with dependency libpcap: YES 00:01:33.566 Compiler for C supports arguments -Wcast-qual: YES 00:01:33.566 Compiler for C supports arguments -Wdeprecated: YES 00:01:33.566 Compiler for C supports arguments -Wformat: YES 00:01:33.566 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:33.566 Compiler for C supports arguments -Wformat-security: NO 00:01:33.566 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:33.566 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:33.566 Compiler for C supports arguments -Wnested-externs: YES 00:01:33.566 Compiler for C supports arguments -Wold-style-definition: YES 00:01:33.566 Compiler for C supports arguments -Wpointer-arith: YES 00:01:33.566 Compiler for C supports arguments -Wsign-compare: YES 00:01:33.566 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:33.566 Compiler for C supports arguments -Wundef: YES 00:01:33.566 Compiler for C supports arguments -Wwrite-strings: YES 00:01:33.566 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:33.566 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:33.566 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:33.566 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:33.566 Program objdump found: YES (/usr/bin/objdump) 00:01:33.566 Compiler for C supports arguments -mavx512f: YES 00:01:33.566 Checking if "AVX512 checking" compiles: YES 00:01:33.566 Fetching value of define "__SSE4_2__" : 1 00:01:33.566 Fetching value of define "__AES__" : 1 00:01:33.566 Fetching value of define "__AVX__" : 1 00:01:33.566 Fetching value of define "__AVX2__" : 1 00:01:33.566 Fetching value of define "__AVX512BW__" : 1 00:01:33.567 Fetching value of define "__AVX512CD__" : 1 00:01:33.567 Fetching value of define "__AVX512DQ__" : 1 00:01:33.567 Fetching value of define "__AVX512F__" : 1 00:01:33.567 
Fetching value of define "__AVX512VL__" : 1 00:01:33.567 Fetching value of define "__PCLMUL__" : 1 00:01:33.567 Fetching value of define "__RDRND__" : 1 00:01:33.567 Fetching value of define "__RDSEED__" : 1 00:01:33.567 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:33.567 Fetching value of define "__znver1__" : (undefined) 00:01:33.567 Fetching value of define "__znver2__" : (undefined) 00:01:33.567 Fetching value of define "__znver3__" : (undefined) 00:01:33.567 Fetching value of define "__znver4__" : (undefined) 00:01:33.567 Library asan found: YES 00:01:33.567 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:33.567 Message: lib/log: Defining dependency "log" 00:01:33.567 Message: lib/kvargs: Defining dependency "kvargs" 00:01:33.567 Message: lib/telemetry: Defining dependency "telemetry" 00:01:33.567 Library rt found: YES 00:01:33.567 Checking for function "getentropy" : NO 00:01:33.567 Message: lib/eal: Defining dependency "eal" 00:01:33.567 Message: lib/ring: Defining dependency "ring" 00:01:33.567 Message: lib/rcu: Defining dependency "rcu" 00:01:33.567 Message: lib/mempool: Defining dependency "mempool" 00:01:33.567 Message: lib/mbuf: Defining dependency "mbuf" 00:01:33.567 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:33.567 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:33.567 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:33.567 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:33.567 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:33.567 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:33.567 Compiler for C supports arguments -mpclmul: YES 00:01:33.567 Compiler for C supports arguments -maes: YES 00:01:33.567 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:33.567 Compiler for C supports arguments -mavx512bw: YES 00:01:33.567 Compiler for C supports arguments -mavx512dq: YES 00:01:33.567 Compiler for C supports arguments 
-mavx512vl: YES
00:01:33.567 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:33.567 Compiler for C supports arguments -mavx2: YES
00:01:33.567 Compiler for C supports arguments -mavx: YES
00:01:33.567 Message: lib/net: Defining dependency "net"
00:01:33.567 Message: lib/meter: Defining dependency "meter"
00:01:33.567 Message: lib/ethdev: Defining dependency "ethdev"
00:01:33.567 Message: lib/pci: Defining dependency "pci"
00:01:33.567 Message: lib/cmdline: Defining dependency "cmdline"
00:01:33.567 Message: lib/hash: Defining dependency "hash"
00:01:33.567 Message: lib/timer: Defining dependency "timer"
00:01:33.567 Message: lib/compressdev: Defining dependency "compressdev"
00:01:33.567 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:33.567 Message: lib/dmadev: Defining dependency "dmadev"
00:01:33.567 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:33.567 Message: lib/power: Defining dependency "power"
00:01:33.567 Message: lib/reorder: Defining dependency "reorder"
00:01:33.567 Message: lib/security: Defining dependency "security"
00:01:33.567 Has header "linux/userfaultfd.h" : YES
00:01:33.567 Has header "linux/vduse.h" : YES
00:01:33.567 Message: lib/vhost: Defining dependency "vhost"
00:01:33.567 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:33.567 Message: drivers/bus/auxiliary: Defining dependency "bus_auxiliary"
00:01:33.567 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:33.567 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:33.567 Compiler for C supports arguments -std=c11: YES
00:01:33.567 Compiler for C supports arguments -Wno-strict-prototypes: YES
00:01:33.567 Compiler for C supports arguments -D_BSD_SOURCE: YES
00:01:33.567 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES
00:01:33.567 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES
00:01:33.567 Run-time dependency libmlx5 found: YES 1.24.44.0
00:01:33.567 Run-time dependency libibverbs found: YES 1.14.44.0
00:01:33.567 Library mtcr_ul found: NO
00:01:33.567 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_ESP" with dependencies libmlx5, libibverbs: YES
00:01:33.567 Header "infiniband/verbs.h" has symbol "IBV_RX_HASH_IPSEC_SPI" with dependencies libmlx5, libibverbs: YES
00:01:33.567 Header "infiniband/verbs.h" has symbol "IBV_ACCESS_RELAXED_ORDERING " with dependencies libmlx5, libibverbs: YES
00:01:33.567 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX" with dependencies libmlx5, libibverbs: YES
00:01:33.567 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS" with dependencies libmlx5, libibverbs: YES
00:01:33.567 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED" with dependencies libmlx5, libibverbs: YES
00:01:33.567 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP" with dependencies libmlx5, libibverbs: YES
00:01:33.567 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD" with dependencies libmlx5, libibverbs: YES
00:01:33.567 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_flow_action_packet_reformat" with dependencies libmlx5, libibverbs: YES
00:01:33.567 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_MPLS" with dependencies libmlx5, libibverbs: YES
00:01:33.567 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAGS_PCI_WRITE_END_PADDING" with dependencies libmlx5, libibverbs: YES
00:01:33.567 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAG_RX_END_PADDING" with dependencies libmlx5, libibverbs: NO
00:01:33.567 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_devx_port" with dependencies libmlx5, libibverbs: NO
00:01:33.567 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_port" with dependencies libmlx5, libibverbs: YES
00:01:33.567 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_ib_port" with dependencies libmlx5, libibverbs: YES
00:01:37.760 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_create" with dependencies libmlx5, libibverbs: YES
00:01:37.760 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_COUNTERS_DEVX" with dependencies libmlx5, libibverbs: YES
00:01:37.760 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_DEFAULT_MISS" with dependencies libmlx5, libibverbs: YES
00:01:37.760 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_query_async" with dependencies libmlx5, libibverbs: YES
00:01:37.760 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_qp_query" with dependencies libmlx5, libibverbs: YES
00:01:37.760 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_pp_alloc" with dependencies libmlx5, libibverbs: YES
00:01:37.760 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_devx_tir" with dependencies libmlx5, libibverbs: YES
00:01:37.760 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_get_event" with dependencies libmlx5, libibverbs: YES
00:01:37.760 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_meter" with dependencies libmlx5, libibverbs: YES
00:01:37.760 Header "infiniband/mlx5dv.h" has symbol "MLX5_MMAP_GET_NC_PAGES_CMD" with dependencies libmlx5, libibverbs: YES
00:01:37.760 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_NIC_RX" with dependencies libmlx5, libibverbs: YES
00:01:37.760 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_FDB" with dependencies libmlx5, libibverbs: YES
00:01:37.760 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_push_vlan" with dependencies libmlx5, libibverbs: YES
00:01:37.760 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_alloc_var" with dependencies libmlx5, libibverbs: YES
00:01:37.760 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ENHANCED_MPSW" with dependencies libmlx5, libibverbs: NO
00:01:37.760 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_SEND_EN" with dependencies libmlx5, libibverbs: NO
00:01:37.760 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_WAIT" with dependencies libmlx5, libibverbs: NO
00:01:37.760 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ACCESS_ASO" with dependencies libmlx5, libibverbs: NO
00:01:37.760 Header "linux/if_link.h" has symbol "IFLA_NUM_VF" with dependencies libmlx5, libibverbs: YES
00:01:37.760 Header "linux/if_link.h" has symbol "IFLA_EXT_MASK" with dependencies libmlx5, libibverbs: YES
00:01:37.760 Header "linux/if_link.h" has symbol "IFLA_PHYS_SWITCH_ID" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Header "linux/if_link.h" has symbol "IFLA_PHYS_PORT_NAME" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Header "rdma/rdma_netlink.h" has symbol "RDMA_NL_NLDEV" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_GET" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_PORT_GET" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_INDEX" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_NAME" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_INDEX" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_STATE" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_NDEV_INDEX" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_domain" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_sampler" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_set_reclaim_device_memory" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_array" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Header "linux/devlink.h" has symbol "DEVLINK_GENL_NAME" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_aso" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Header "infiniband/verbs.h" has symbol "INFINIBAND_VERBS_H" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Header "infiniband/mlx5dv.h" has symbol "MLX5_WQE_UMR_CTRL_FLAG_INLINE" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_rule" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_allow_duplicate_rules" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Header "infiniband/verbs.h" has symbol "ibv_reg_mr_iova" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Header "infiniband/verbs.h" has symbol "ibv_import_device" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_root_table" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_steering_anchor" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Header "infiniband/verbs.h" has symbol "ibv_is_fork_initialized" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Checking whether type "struct mlx5dv_sw_parsing_caps" has member "sw_parsing_offloads" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Checking whether type "struct ibv_counter_set_init_attr" has member "counter_set_id" with dependencies libmlx5, libibverbs: NO
00:01:37.761 Checking whether type "struct ibv_counters_init_attr" has member "comp_mask" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Checking whether type "struct mlx5dv_devx_uar" has member "mmap_off" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Checking whether type "struct mlx5dv_flow_matcher_attr" has member "ft_type" with dependencies libmlx5, libibverbs: YES
00:01:37.761 Configuring mlx5_autoconf.h using configuration
00:01:37.761 Message: drivers/common/mlx5: Defining dependency "common_mlx5"
00:01:37.761 Run-time dependency libcrypto found: YES 3.0.9
00:01:37.761 Library IPSec_MB found: YES
00:01:37.761 Fetching value of define "IMB_VERSION_STR" : "1.5.0"
00:01:37.761 Message: drivers/common/qat: Defining dependency "common_qat"
00:01:37.761 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:37.761 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:37.761 Library IPSec_MB found: YES
00:01:37.761 Fetching value of define "IMB_VERSION_STR" : "1.5.0" (cached)
00:01:37.761 Message: drivers/crypto/ipsec_mb: Defining dependency "crypto_ipsec_mb"
00:01:37.761 Compiler for C supports arguments -std=c11: YES (cached)
00:01:37.761 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached)
00:01:37.761 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached)
00:01:37.761 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached)
00:01:37.761 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached)
00:01:37.761 Message: drivers/crypto/mlx5: Defining dependency "crypto_mlx5"
00:01:37.761 Run-time dependency libisal found: NO (tried pkgconfig)
00:01:37.761 Library libisal found: NO
00:01:37.761 Message: drivers/compress/isal: Defining dependency "compress_isal"
00:01:37.761 Compiler for C supports arguments -std=c11: YES (cached)
00:01:37.761 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached)
00:01:37.761 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached)
00:01:37.761 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached)
00:01:37.761 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached)
00:01:37.761 Message: drivers/compress/mlx5: Defining dependency "compress_mlx5"
00:01:37.761 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:37.761 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:37.761 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:37.761 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:37.761 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:37.761 Program doxygen found: YES (/usr/bin/doxygen)
00:01:37.761 Configuring doxy-api-html.conf using configuration
00:01:37.761 Configuring doxy-api-man.conf using configuration
00:01:37.761 Program mandb found: YES (/usr/bin/mandb)
00:01:37.761 Program sphinx-build found: NO
00:01:37.761 Configuring rte_build_config.h using configuration
00:01:37.761 Message:
00:01:37.761 =================
00:01:37.761 Applications Enabled
00:01:37.761 =================
00:01:37.761
00:01:37.761 apps:
00:01:37.761
00:01:37.761
00:01:37.761 Message:
00:01:37.761 =================
00:01:37.761 Libraries Enabled
00:01:37.761 =================
00:01:37.761
00:01:37.761 libs:
00:01:37.761 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:37.761 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:37.761 cryptodev, dmadev, power, reorder, security, vhost,
00:01:37.761
00:01:37.761 Message:
00:01:37.761 ===============
00:01:37.761 Drivers Enabled
00:01:37.761 ===============
00:01:37.761
00:01:37.761 common:
00:01:37.761 mlx5, qat,
00:01:37.761 bus:
00:01:37.761 auxiliary, pci, vdev,
00:01:37.761 mempool:
00:01:37.761 ring,
00:01:37.761 dma:
00:01:37.761
00:01:37.761 net:
00:01:37.761
00:01:37.761 crypto:
00:01:37.761 ipsec_mb, mlx5,
00:01:37.761 compress:
00:01:37.761 isal, mlx5,
00:01:37.761 vdpa:
00:01:37.761
00:01:37.761
00:01:37.761 Message:
00:01:37.761 =================
00:01:37.761 Content Skipped
00:01:37.761 =================
00:01:37.761
00:01:37.761 apps:
00:01:37.761 dumpcap: explicitly disabled via build config
00:01:37.761 graph: explicitly disabled via build config
00:01:37.761 pdump: explicitly disabled via build config
00:01:37.761 proc-info: explicitly disabled via build config
00:01:37.761 test-acl: explicitly disabled via build config
00:01:37.761 test-bbdev: explicitly disabled via build config
00:01:37.761 test-cmdline: explicitly disabled via build config
00:01:37.761 test-compress-perf: explicitly disabled via build config
00:01:37.761 test-crypto-perf: explicitly disabled via build config
00:01:37.761 test-dma-perf: explicitly disabled via build config
00:01:37.761 test-eventdev: explicitly disabled via build config
00:01:37.761 test-fib: explicitly disabled via build config
00:01:37.761 test-flow-perf: explicitly disabled via build config
00:01:37.761 test-gpudev: explicitly disabled via build config
00:01:37.761 test-mldev: explicitly disabled via build config
00:01:37.761 test-pipeline: explicitly disabled via build config
00:01:37.761 test-pmd: explicitly disabled via build config
00:01:37.761 test-regex: explicitly disabled via build config
00:01:37.761 test-sad: explicitly disabled via build config
00:01:37.761 test-security-perf: explicitly disabled via build config
00:01:37.761
00:01:37.761 libs:
00:01:37.761 argparse: explicitly disabled via build config
00:01:37.761 metrics: explicitly disabled via build config
00:01:37.761 acl: explicitly disabled via build config
00:01:37.761 bbdev: explicitly disabled via build config
00:01:37.761 bitratestats: explicitly disabled via build config
00:01:37.761 bpf: explicitly disabled via build config
00:01:37.761 cfgfile: explicitly disabled via build config
00:01:37.761 distributor: explicitly disabled via build config
00:01:37.761 efd: explicitly disabled via build config
00:01:37.761 eventdev: explicitly disabled via build config
00:01:37.761 dispatcher: explicitly disabled via build config
00:01:37.761 gpudev: explicitly disabled via build config
00:01:37.761 gro: explicitly disabled via build config
00:01:37.761 gso: explicitly disabled via build config
00:01:37.761 ip_frag: explicitly disabled via build config
00:01:37.761 jobstats: explicitly disabled via build config
00:01:37.761 latencystats: explicitly disabled via build config
00:01:37.761 lpm: explicitly disabled via build config
00:01:37.761 member: explicitly disabled via build config
00:01:37.761 pcapng: explicitly disabled via build config
00:01:37.761 rawdev: explicitly disabled via build config
00:01:37.761 regexdev: explicitly disabled via build config
00:01:37.761 mldev: explicitly disabled via build config
00:01:37.762 rib: explicitly disabled via build config
00:01:37.762 sched: explicitly disabled via build config
00:01:37.762 stack: explicitly disabled via build config
00:01:37.762 ipsec: explicitly disabled via build config
00:01:37.762 pdcp: explicitly disabled via build config
00:01:37.762 fib: explicitly disabled via build config
00:01:37.762 port: explicitly disabled via build config
00:01:37.762 pdump: explicitly disabled via build config
00:01:37.762 table: explicitly disabled via build config
00:01:37.762 pipeline: explicitly disabled via build config
00:01:37.762 graph: explicitly disabled via build config
00:01:37.762 node: explicitly disabled via build config
00:01:37.762
00:01:37.762 drivers:
00:01:37.762 common/cpt: not in enabled drivers build config
00:01:37.762 common/dpaax: not in enabled drivers build config
00:01:37.762 common/iavf: not in enabled drivers build config
00:01:37.762 common/idpf: not in enabled drivers build config
00:01:37.762 common/ionic: not in enabled drivers build config
00:01:37.762 common/mvep: not in enabled drivers build config
00:01:37.762 common/octeontx: not in enabled drivers build config
00:01:37.762 bus/cdx: not in enabled drivers build config
00:01:37.762 bus/dpaa: not in enabled drivers build config
00:01:37.762 bus/fslmc: not in enabled drivers build config
00:01:37.762 bus/ifpga: not in enabled drivers build config
00:01:37.762 bus/platform: not in enabled drivers build config
00:01:37.762 bus/uacce: not in enabled drivers build config
00:01:37.762 bus/vmbus: not in enabled drivers build config
00:01:37.762 common/cnxk: not in enabled drivers build config
00:01:37.762 common/nfp: not in enabled drivers build config
00:01:37.762 common/nitrox: not in enabled drivers build config
00:01:37.762 common/sfc_efx: not in enabled drivers build config
00:01:37.762 mempool/bucket: not in enabled drivers build config
00:01:37.762 mempool/cnxk: not in enabled drivers build config
00:01:37.762 mempool/dpaa: not in enabled drivers build config
00:01:37.762 mempool/dpaa2: not in enabled drivers build config
00:01:37.762 mempool/octeontx: not in enabled drivers build config
00:01:37.762 mempool/stack: not in enabled drivers build config
00:01:37.762 dma/cnxk: not in enabled drivers build config
00:01:37.762 dma/dpaa: not in enabled drivers build config
00:01:37.762 dma/dpaa2: not in enabled drivers build config
00:01:37.762 dma/hisilicon: not in enabled drivers build config
00:01:37.762 dma/idxd: not in enabled drivers build config
00:01:37.762 dma/ioat: not in enabled drivers build config
00:01:37.762 dma/skeleton: not in enabled drivers build config
00:01:37.762 net/af_packet: not in enabled drivers build config
00:01:37.762 net/af_xdp: not in enabled drivers build config
00:01:37.762 net/ark: not in enabled drivers build config
00:01:37.762 net/atlantic: not in enabled drivers build config
00:01:37.762 net/avp: not in enabled drivers build config
00:01:37.762 net/axgbe: not in enabled drivers build config
00:01:37.762 net/bnx2x: not in enabled drivers build config
00:01:37.762 net/bnxt: not in enabled drivers build config
00:01:37.762 net/bonding: not in enabled drivers build config
00:01:37.762 net/cnxk: not in enabled drivers build config
00:01:37.762 net/cpfl: not in enabled drivers build config
00:01:37.762 net/cxgbe: not in enabled drivers build config
00:01:37.762 net/dpaa: not in enabled drivers build config
00:01:37.762 net/dpaa2: not in enabled drivers build config
00:01:37.762 net/e1000: not in enabled drivers build config
00:01:37.762 net/ena: not in enabled drivers build config
00:01:37.762 net/enetc: not in enabled drivers build config
00:01:37.762 net/enetfec: not in enabled drivers build config
00:01:37.762 net/enic: not in enabled drivers build config
00:01:37.762 net/failsafe: not in enabled drivers build config
00:01:37.762 net/fm10k: not in enabled drivers build config
00:01:37.762 net/gve: not in enabled drivers build config
00:01:37.762 net/hinic: not in enabled drivers build config
00:01:37.762 net/hns3: not in enabled drivers build config
00:01:37.762 net/i40e: not in enabled drivers build config
00:01:37.762 net/iavf: not in enabled drivers build config
00:01:37.762 net/ice: not in enabled drivers build config
00:01:37.762 net/idpf: not in enabled drivers build config
00:01:37.762 net/igc: not in enabled drivers build config
00:01:37.762 net/ionic: not in enabled drivers build config
00:01:37.762 net/ipn3ke: not in enabled drivers build config
00:01:37.762 net/ixgbe: not in enabled drivers build config
00:01:37.762 net/mana: not in enabled drivers build config
00:01:37.762 net/memif: not in enabled drivers build config
00:01:37.762 net/mlx4: not in enabled drivers build config
00:01:37.762 net/mlx5: not in enabled drivers build config
00:01:37.762 net/mvneta: not in enabled drivers build config
00:01:37.762 net/mvpp2: not in enabled drivers build config
00:01:37.762 net/netvsc: not in enabled drivers build config
00:01:37.762 net/nfb: not in enabled drivers build config
00:01:37.762 net/nfp: not in enabled drivers build config
00:01:37.762 net/ngbe: not in enabled drivers build config
00:01:37.762 net/null: not in enabled drivers build config
00:01:37.762 net/octeontx: not in enabled drivers build config
00:01:37.762 net/octeon_ep: not in enabled drivers build config
00:01:37.762 net/pcap: not in enabled drivers build config
00:01:37.762 net/pfe: not in enabled drivers build config
00:01:37.762 net/qede: not in enabled drivers build config
00:01:37.762 net/ring: not in enabled drivers build config
00:01:37.762 net/sfc: not in enabled drivers build config
00:01:37.762 net/softnic: not in enabled drivers build config
00:01:37.762 net/tap: not in enabled drivers build config
00:01:37.762 net/thunderx: not in enabled drivers build config
00:01:37.762 net/txgbe: not in enabled drivers build config
00:01:37.762 net/vdev_netvsc: not in enabled drivers build config
00:01:37.762 net/vhost: not in enabled drivers build config
00:01:37.762 net/virtio: not in enabled drivers build config
00:01:37.762 net/vmxnet3: not in enabled drivers build config
00:01:37.762 raw/*: missing internal dependency, "rawdev"
00:01:37.762 crypto/armv8: not in enabled drivers build config
00:01:37.762 crypto/bcmfs: not in enabled drivers build config
00:01:37.762 crypto/caam_jr: not in enabled drivers build config
00:01:37.762 crypto/ccp: not in enabled drivers build config
00:01:37.762 crypto/cnxk: not in enabled drivers build config
00:01:37.762 crypto/dpaa_sec: not in enabled drivers build config
00:01:37.762 crypto/dpaa2_sec: not in enabled drivers build config
00:01:37.762 crypto/mvsam: not in enabled drivers build config
00:01:37.762 crypto/nitrox: not in enabled drivers build config
00:01:37.762 crypto/null: not in enabled drivers build config
00:01:37.762 crypto/octeontx: not in enabled drivers build config
00:01:37.762 crypto/openssl: not in enabled drivers build config
00:01:37.762 crypto/scheduler: not in enabled drivers build config
00:01:37.762 crypto/uadk: not in enabled drivers build config
00:01:37.762 crypto/virtio: not in enabled drivers build config
00:01:37.762 compress/nitrox: not in enabled drivers build config
00:01:37.762 compress/octeontx: not in enabled drivers build config
00:01:37.762 compress/zlib: not in enabled drivers build config
00:01:37.762 regex/*: missing internal dependency, "regexdev"
00:01:37.762 ml/*: missing internal dependency, "mldev"
00:01:37.762 vdpa/ifc: not in enabled drivers build config
00:01:37.762 vdpa/mlx5: not in enabled drivers build config
00:01:37.762 vdpa/nfp: not in enabled drivers build config
00:01:37.762 vdpa/sfc: not in enabled drivers build config
00:01:37.762 event/*: missing internal dependency, "eventdev"
00:01:37.762 baseband/*: missing internal dependency, "bbdev"
00:01:37.762 gpu/*: missing internal dependency, "gpudev"
00:01:37.762
00:01:37.762
00:01:38.022 Build targets in project: 115
00:01:38.022
00:01:38.022 DPDK 24.03.0
00:01:38.022
00:01:38.022 User defined options
00:01:38.022 buildtype : debug
00:01:38.022 default_library : shared
00:01:38.022 libdir : lib
00:01:38.022 prefix : /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build
00:01:38.022 b_sanitize : address
00:01:38.022 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -I/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -DNO_COMPAT_IMB_API_053 -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isalbuild -fPIC -Werror
00:01:38.022 c_link_args : -L/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -L/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l/.libs -lisal
00:01:38.022 cpu_instruction_set: native
00:01:38.022 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:01:38.022 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:01:38.022 enable_docs : false
00:01:38.022 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,crypto/qat,compress/qat,common/qat,common/mlx5,bus/auxiliary,crypto,crypto/aesni_mb,crypto/mlx5,crypto/ipsec_mb,compress,compress/isal,compress/mlx5
00:01:38.022 enable_kmods : false
00:01:38.022 max_lcores : 128
00:01:38.022 tests : false
00:01:38.022
00:01:38.022 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:38.283 ninja: Entering directory `/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp'
00:01:38.549 [1/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:38.549 [2/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:38.549 [3/378] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:38.549 [4/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:38.549 [5/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:38.549 [6/378] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:38.549 [7/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:38.549 [8/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:38.549 [9/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:38.549 [10/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:38.549 [11/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:38.549 [12/378] Linking static target lib/librte_kvargs.a
00:01:38.549 [13/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:38.549 [14/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:38.549 [15/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:38.549 [16/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:38.549 [17/378] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:38.549 [18/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:38.808 [19/378] Linking static target lib/librte_log.a
00:01:38.808 [20/378] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:38.808 [21/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:38.808 [22/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:38.808 [23/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:38.808 [24/378] Linking static target lib/librte_pci.a
00:01:38.808 [25/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:38.808 [26/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:38.808 [27/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:38.808 [28/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:38.808 [29/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:38.808 [30/378] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:38.808 [31/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:38.808 [32/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:38.808 [33/378] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:38.808 [34/378] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:39.072 [35/378] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:39.072 [36/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:39.072 [37/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:39.072 [38/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:39.072 [39/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:39.072 [40/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:39.072 [41/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:39.072 [42/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:39.072 [43/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:39.072 [44/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:39.072 [45/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:39.072 [46/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:39.072 [47/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:39.072 [48/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:39.072 [49/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:39.072 [50/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:39.072 [51/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:39.072 [52/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:39.072 [53/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:39.072 [54/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:39.072 [55/378] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:39.072 [56/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:39.072 [57/378] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:39.072 [58/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:39.072 [59/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:39.072 [60/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:39.072 [61/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:39.072 [62/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:39.072 [63/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:39.072 [64/378] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:39.072 [65/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:39.072 [66/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:39.072 [67/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:39.072 [68/378] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:39.072 [69/378] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:39.072 [70/378] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:39.072 [71/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:39.072 [72/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:39.072 [73/378] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:39.072 [74/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:39.072 [75/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:39.072 [76/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:39.072 [77/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:39.072 [78/378] Linking static target lib/librte_meter.a
00:01:39.072 [79/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:39.072 [80/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:39.072 [81/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:39.072 [82/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:39.072 [83/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:39.072 [84/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:39.332 [85/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:39.332 [86/378] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:39.332 [87/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:39.332 [88/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:39.332 [89/378] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:39.332 [90/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:39.332 [91/378] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:39.332 [92/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:39.332 [93/378] Linking static target lib/librte_ring.a
00:01:39.332 [94/378] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:39.332 [95/378] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:39.332 [96/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:39.332 [97/378] Linking static target lib/librte_telemetry.a
00:01:39.332 [98/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:39.332 [99/378] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:39.332 [100/378] Linking static target lib/librte_cmdline.a
00:01:39.332 [101/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:39.332 [102/378] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:39.332 [103/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:39.332 [104/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:39.332 [105/378] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:39.332 [106/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_params.c.o
00:01:39.332 [107/378] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:39.332 [108/378] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:39.332 [109/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:39.332 [110/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:39.332 [111/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:39.332 [112/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:39.332 [113/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:39.333 [114/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:39.333 [115/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:39.333 [116/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:39.333 [117/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:39.333 [118/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:39.333 [119/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_logs.c.o
00:01:39.333 [120/378] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:39.333 [121/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:39.333 [122/378] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:39.333 [123/378] Linking static target lib/librte_timer.a
00:01:39.333 [124/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:39.333 [125/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:39.333 [126/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:39.333 [127/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:39.333 [128/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:39.333 [129/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:39.333 [130/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:39.595 [131/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:39.595 [132/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:39.595 [133/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:39.595 [134/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:39.595 [135/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:39.595 [136/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:39.595 [137/378] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:39.595 [138/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:39.595 [139/378] Linking static target lib/librte_mempool.a
00:01:39.595 [140/378] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:39.595 [141/378] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:39.595 [142/378] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:39.595 [143/378] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:39.595 [144/378] Linking static target lib/librte_net.a
00:01:39.595 [145/378] Linking static target lib/librte_dmadev.a
00:01:39.595 [146/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:39.595 [147/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:39.595 [148/378] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:39.595 [149/378] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:39.595 [150/378] Linking static target
lib/librte_eal.a 00:01:39.595 [151/378] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:39.595 [152/378] Linking static target lib/librte_rcu.a 00:01:39.595 [153/378] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:39.595 [154/378] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.595 [155/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:39.595 [156/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:39.595 [157/378] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.595 [158/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_glue.c.o 00:01:39.595 [159/378] Linking static target lib/librte_compressdev.a 00:01:39.595 [160/378] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:39.853 [161/378] Linking target lib/librte_log.so.24.1 00:01:39.853 [162/378] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.853 [163/378] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:39.853 [164/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_linux_auxiliary.c.o 00:01:39.853 [165/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_common.c.o 00:01:39.853 [166/378] Linking static target drivers/libtmp_rte_bus_auxiliary.a 00:01:39.853 [167/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:39.853 [168/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_pf2vf.c.o 00:01:39.853 [169/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:39.853 [170/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:39.853 [171/378] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:39.853 [172/378] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:39.853 [173/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_common.c.o 00:01:39.853 [174/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen1.c.o 00:01:39.853 [175/378] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:39.853 [176/378] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:39.853 [177/378] Linking static target lib/librte_power.a 00:01:39.853 [178/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:39.853 [179/378] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.853 [180/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:39.853 [181/378] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.853 [182/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen4.c.o 00:01:39.853 [183/378] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.853 [184/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen1.c.o 00:01:39.853 [185/378] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:39.853 [186/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:39.853 [187/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen4.c.o 00:01:39.853 [188/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:39.853 [189/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mp.c.o 00:01:39.853 [190/378] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:39.853 [191/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen3.c.o 00:01:39.853 [192/378] Compiling C object 
drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_auxiliary.c.o 00:01:39.853 [193/378] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:39.853 [194/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen5.c.o 00:01:39.853 [195/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen2.c.o 00:01:40.111 [196/378] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.112 [197/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_ops.c.o 00:01:40.112 [198/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen3.c.o 00:01:40.112 [199/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_malloc.c.o 00:01:40.112 [200/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen2.c.o 00:01:40.112 [201/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen5.c.o 00:01:40.112 [202/378] Linking target lib/librte_kvargs.so.24.1 00:01:40.112 [203/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_pci.c.o 00:01:40.112 [204/378] Linking target lib/librte_telemetry.so.24.1 00:01:40.112 [205/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen2.c.o 00:01:40.112 [206/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen_lce.c.o 00:01:40.112 [207/378] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:40.112 [208/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen5.c.o 00:01:40.112 [209/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_verbs.c.o 00:01:40.112 [210/378] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:40.112 [211/378] Generating drivers/rte_bus_auxiliary.pmd.c with a custom command 
00:01:40.112 [212/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_asym_pmd_gen1.c.o 00:01:40.112 [213/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_os.c.o 00:01:40.112 [214/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_device.c.o 00:01:40.112 [215/378] Compiling C object drivers/librte_bus_auxiliary.a.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:01:40.112 [216/378] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:40.112 [217/378] Compiling C object drivers/librte_bus_auxiliary.so.24.1.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:01:40.112 [218/378] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:40.112 [219/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp_pmd.c.o 00:01:40.112 [220/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_devx.c.o 00:01:40.112 [221/378] Linking static target drivers/librte_bus_auxiliary.a 00:01:40.112 [222/378] Linking static target lib/librte_reorder.a 00:01:40.112 [223/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common.c.o 00:01:40.112 [224/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_crypto.c.o 00:01:40.112 [225/378] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:40.112 [226/378] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:40.112 [227/378] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:40.112 [228/378] Linking static target drivers/librte_bus_vdev.a 00:01:40.112 [229/378] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:40.112 [230/378] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:40.112 [231/378] Compiling C object 
drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_dek.c.o 00:01:40.112 [232/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_utils.c.o 00:01:40.112 [233/378] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.112 [234/378] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:40.112 [235/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto.c.o 00:01:40.112 [236/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym.c.o 00:01:40.112 [237/378] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:40.112 [238/378] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:40.112 [239/378] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:40.112 [240/378] Linking static target drivers/librte_bus_pci.a 00:01:40.112 [241/378] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:40.112 [242/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:40.112 [243/378] Linking static target lib/librte_mbuf.a 00:01:40.370 [244/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen_lce.c.o 00:01:40.370 [245/378] Compiling C object drivers/libtmp_rte_compress_mlx5.a.p/compress_mlx5_mlx5_compress.c.o 00:01:40.370 [246/378] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:40.370 [247/378] Linking static target drivers/libtmp_rte_compress_mlx5.a 00:01:40.370 [248/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_xts.c.o 00:01:40.370 [249/378] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd_ops.c.o 00:01:40.370 [250/378] Linking static target lib/librte_security.a 00:01:40.370 [251/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:40.370 [252/378] 
Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:40.370 [253/378] Linking static target lib/librte_hash.a 00:01:40.370 [254/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_nl.c.o 00:01:40.370 [255/378] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.370 [256/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_private.c.o 00:01:40.370 [257/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:40.370 [258/378] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.370 [259/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_gcm.c.o 00:01:40.370 [260/378] Linking static target drivers/libtmp_rte_crypto_mlx5.a 00:01:40.370 [261/378] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd.c.o 00:01:40.370 [262/378] Linking static target drivers/libtmp_rte_compress_isal.a 00:01:40.370 [263/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_qp.c.o 00:01:40.370 [264/378] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:40.370 [265/378] Generating drivers/rte_bus_auxiliary.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.370 [266/378] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:40.370 [267/378] Linking static target drivers/librte_mempool_ring.a 00:01:40.370 [268/378] Generating drivers/rte_compress_mlx5.pmd.c with a custom command 00:01:40.370 [269/378] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:40.370 [270/378] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.370 [271/378] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:40.629 [272/378] Compiling C object drivers/librte_compress_mlx5.so.24.1.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:01:40.629 [273/378] Compiling C object drivers/librte_compress_mlx5.a.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:01:40.629 [274/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_zuc.c.o 00:01:40.629 [275/378] Linking static target drivers/librte_compress_mlx5.a 00:01:40.629 [276/378] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.629 [277/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_gcm.c.o 00:01:40.629 [278/378] Generating drivers/rte_crypto_mlx5.pmd.c with a custom command 00:01:40.629 [279/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mr.c.o 00:01:40.629 [280/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:40.629 [281/378] Compiling C object drivers/librte_crypto_mlx5.so.24.1.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:01:40.629 [282/378] Compiling C object drivers/librte_crypto_mlx5.a.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:01:40.629 [283/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_chacha_poly.c.o 00:01:40.629 [284/378] Generating drivers/rte_compress_isal.pmd.c with a custom command 00:01:40.629 [285/378] Linking static target drivers/librte_crypto_mlx5.a 00:01:40.629 [286/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:40.629 [287/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_kasumi.c.o 00:01:40.629 [288/378] Linking static target lib/librte_cryptodev.a 00:01:40.629 [289/378] Compiling C object drivers/librte_compress_isal.so.24.1.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:01:40.629 [290/378] Compiling C object drivers/librte_compress_isal.a.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:01:40.629 [291/378] 
Linking static target drivers/librte_compress_isal.a 00:01:40.629 [292/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen4.c.o 00:01:40.629 [293/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp.c.o 00:01:40.888 [294/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_snow3g.c.o 00:01:40.888 [295/378] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.888 [296/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym_session.c.o 00:01:40.888 [297/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_mb.c.o 00:01:40.888 [298/378] Linking static target drivers/libtmp_rte_crypto_ipsec_mb.a 00:01:40.888 [299/378] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.147 [300/378] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.147 [301/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_devx_cmds.c.o 00:01:41.147 [302/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen3.c.o 00:01:41.147 [303/378] Linking static target drivers/libtmp_rte_common_mlx5.a 00:01:41.147 [304/378] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.147 [305/378] Generating drivers/rte_crypto_ipsec_mb.pmd.c with a custom command 00:01:41.147 [306/378] Compiling C object drivers/librte_crypto_ipsec_mb.a.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:01:41.147 [307/378] Compiling C object drivers/librte_crypto_ipsec_mb.so.24.1.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:01:41.147 [308/378] Linking static target drivers/librte_crypto_ipsec_mb.a 00:01:41.147 [309/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_sym_pmd_gen1.c.o 00:01:41.406 [310/378] 
Generating drivers/rte_common_mlx5.pmd.c with a custom command 00:01:41.406 [311/378] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.406 [312/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:41.406 [313/378] Compiling C object drivers/librte_common_mlx5.so.24.1.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:01:41.406 [314/378] Compiling C object drivers/librte_common_mlx5.a.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:01:41.406 [315/378] Linking static target lib/librte_ethdev.a 00:01:41.406 [316/378] Linking static target drivers/librte_common_mlx5.a 00:01:42.342 [317/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:42.911 [318/378] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.481 [319/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_asym.c.o 00:01:43.481 [320/378] Linking static target drivers/libtmp_rte_common_qat.a 00:01:43.740 [321/378] Generating drivers/rte_common_qat.pmd.c with a custom command 00:01:43.740 [322/378] Compiling C object drivers/librte_common_qat.a.p/meson-generated_.._rte_common_qat.pmd.c.o 00:01:43.740 [323/378] Compiling C object drivers/librte_common_qat.so.24.1.p/meson-generated_.._rte_common_qat.pmd.c.o 00:01:43.740 [324/378] Linking static target drivers/librte_common_qat.a 00:01:45.127 [325/378] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:45.127 [326/378] Linking static target lib/librte_vhost.a 00:01:47.033 [327/378] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.292 [328/378] Generating drivers/rte_common_mlx5.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.517 [329/378] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.085 [330/378] Generating lib/eal.sym_chk with a custom command (wrapped by meson to 
capture output) 00:01:52.085 [331/378] Linking target lib/librte_eal.so.24.1 00:01:52.344 [332/378] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:52.344 [333/378] Linking target lib/librte_ring.so.24.1 00:01:52.344 [334/378] Linking target lib/librte_timer.so.24.1 00:01:52.344 [335/378] Linking target lib/librte_meter.so.24.1 00:01:52.344 [336/378] Linking target lib/librte_dmadev.so.24.1 00:01:52.344 [337/378] Linking target lib/librte_pci.so.24.1 00:01:52.344 [338/378] Linking target drivers/librte_bus_vdev.so.24.1 00:01:52.344 [339/378] Linking target drivers/librte_bus_auxiliary.so.24.1 00:01:52.603 [340/378] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:52.603 [341/378] Generating symbol file drivers/librte_bus_auxiliary.so.24.1.p/librte_bus_auxiliary.so.24.1.symbols 00:01:52.603 [342/378] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:52.603 [343/378] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:52.603 [344/378] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:52.603 [345/378] Generating symbol file drivers/librte_bus_vdev.so.24.1.p/librte_bus_vdev.so.24.1.symbols 00:01:52.603 [346/378] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:52.603 [347/378] Linking target lib/librte_rcu.so.24.1 00:01:52.603 [348/378] Linking target lib/librte_mempool.so.24.1 00:01:52.603 [349/378] Linking target drivers/librte_bus_pci.so.24.1 00:01:52.603 [350/378] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:52.862 [351/378] Generating symbol file drivers/librte_bus_pci.so.24.1.p/librte_bus_pci.so.24.1.symbols 00:01:52.862 [352/378] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:52.862 [353/378] Linking target drivers/librte_mempool_ring.so.24.1 00:01:52.862 [354/378] Linking target 
lib/librte_mbuf.so.24.1 00:01:52.862 [355/378] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:52.862 [356/378] Linking target lib/librte_net.so.24.1 00:01:52.862 [357/378] Linking target lib/librte_compressdev.so.24.1 00:01:53.120 [358/378] Linking target lib/librte_reorder.so.24.1 00:01:53.120 [359/378] Linking target lib/librte_cryptodev.so.24.1 00:01:53.120 [360/378] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:53.120 [361/378] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:53.120 [362/378] Generating symbol file lib/librte_compressdev.so.24.1.p/librte_compressdev.so.24.1.symbols 00:01:53.120 [363/378] Linking target lib/librte_cmdline.so.24.1 00:01:53.120 [364/378] Linking target lib/librte_hash.so.24.1 00:01:53.120 [365/378] Linking target lib/librte_ethdev.so.24.1 00:01:53.120 [366/378] Linking target lib/librte_security.so.24.1 00:01:53.120 [367/378] Linking target drivers/librte_compress_isal.so.24.1 00:01:53.379 [368/378] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:53.379 [369/378] Generating symbol file lib/librte_security.so.24.1.p/librte_security.so.24.1.symbols 00:01:53.379 [370/378] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:53.379 [371/378] Linking target drivers/librte_common_mlx5.so.24.1 00:01:53.379 [372/378] Linking target lib/librte_power.so.24.1 00:01:53.379 [373/378] Linking target lib/librte_vhost.so.24.1 00:01:53.379 [374/378] Generating symbol file drivers/librte_common_mlx5.so.24.1.p/librte_common_mlx5.so.24.1.symbols 00:01:53.638 [375/378] Linking target drivers/librte_compress_mlx5.so.24.1 00:01:53.638 [376/378] Linking target drivers/librte_crypto_mlx5.so.24.1 00:01:53.638 [377/378] Linking target drivers/librte_crypto_ipsec_mb.so.24.1 00:01:53.638 [378/378] Linking target drivers/librte_common_qat.so.24.1 00:01:53.638 INFO: autodetecting 
backend as ninja 00:01:53.638 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp -j 112 00:01:54.574 CC lib/ut/ut.o 00:01:54.833 CC lib/ut_mock/mock.o 00:01:54.833 CC lib/log/log.o 00:01:54.833 CC lib/log/log_flags.o 00:01:54.833 CC lib/log/log_deprecated.o 00:01:54.833 LIB libspdk_ut.a 00:01:54.833 LIB libspdk_log.a 00:01:54.833 SO libspdk_ut.so.2.0 00:01:54.833 LIB libspdk_ut_mock.a 00:01:54.833 SO libspdk_log.so.7.0 00:01:54.833 SO libspdk_ut_mock.so.6.0 00:01:55.093 SYMLINK libspdk_ut.so 00:01:55.093 SYMLINK libspdk_log.so 00:01:55.093 SYMLINK libspdk_ut_mock.so 00:01:55.352 CC lib/dma/dma.o 00:01:55.352 CC lib/util/base64.o 00:01:55.352 CC lib/util/bit_array.o 00:01:55.352 CC lib/util/cpuset.o 00:01:55.352 CC lib/util/crc16.o 00:01:55.352 CC lib/util/crc32_ieee.o 00:01:55.352 CC lib/util/crc32.o 00:01:55.352 CC lib/util/crc32c.o 00:01:55.352 CC lib/util/fd.o 00:01:55.352 CC lib/util/crc64.o 00:01:55.352 CC lib/util/dif.o 00:01:55.352 CC lib/util/file.o 00:01:55.352 CXX lib/trace_parser/trace.o 00:01:55.352 CC lib/util/hexlify.o 00:01:55.352 CC lib/util/iov.o 00:01:55.352 CC lib/util/pipe.o 00:01:55.352 CC lib/util/math.o 00:01:55.352 CC lib/util/strerror_tls.o 00:01:55.352 CC lib/util/string.o 00:01:55.352 CC lib/util/uuid.o 00:01:55.352 CC lib/util/fd_group.o 00:01:55.352 CC lib/util/xor.o 00:01:55.352 CC lib/util/zipf.o 00:01:55.352 CC lib/ioat/ioat.o 00:01:55.611 LIB libspdk_dma.a 00:01:55.611 CC lib/vfio_user/host/vfio_user_pci.o 00:01:55.611 CC lib/vfio_user/host/vfio_user.o 00:01:55.611 SO libspdk_dma.so.4.0 00:01:55.611 SYMLINK libspdk_dma.so 00:01:55.611 LIB libspdk_ioat.a 00:01:55.611 SO libspdk_ioat.so.7.0 00:01:55.871 SYMLINK libspdk_ioat.so 00:01:55.871 LIB libspdk_vfio_user.a 00:01:55.871 SO libspdk_vfio_user.so.5.0 00:01:55.871 LIB libspdk_util.a 00:01:55.871 SYMLINK libspdk_vfio_user.so 00:01:55.871 SO libspdk_util.so.9.1 00:01:56.130 SYMLINK libspdk_util.so 
00:01:56.130 LIB libspdk_trace_parser.a 00:01:56.130 SO libspdk_trace_parser.so.5.0 00:01:56.388 SYMLINK libspdk_trace_parser.so 00:01:56.388 CC lib/rdma_provider/common.o 00:01:56.388 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:56.388 CC lib/reduce/reduce.o 00:01:56.388 CC lib/conf/conf.o 00:01:56.388 CC lib/json/json_parse.o 00:01:56.388 CC lib/json/json_util.o 00:01:56.388 CC lib/json/json_write.o 00:01:56.388 CC lib/rdma_utils/rdma_utils.o 00:01:56.388 CC lib/env_dpdk/env.o 00:01:56.388 CC lib/idxd/idxd.o 00:01:56.388 CC lib/env_dpdk/memory.o 00:01:56.388 CC lib/env_dpdk/pci.o 00:01:56.388 CC lib/idxd/idxd_user.o 00:01:56.388 CC lib/env_dpdk/init.o 00:01:56.388 CC lib/idxd/idxd_kernel.o 00:01:56.388 CC lib/env_dpdk/threads.o 00:01:56.388 CC lib/env_dpdk/pci_ioat.o 00:01:56.388 CC lib/env_dpdk/pci_virtio.o 00:01:56.388 CC lib/env_dpdk/pci_vmd.o 00:01:56.388 CC lib/vmd/vmd.o 00:01:56.388 CC lib/env_dpdk/pci_idxd.o 00:01:56.388 CC lib/vmd/led.o 00:01:56.388 CC lib/env_dpdk/pci_event.o 00:01:56.388 CC lib/env_dpdk/sigbus_handler.o 00:01:56.388 CC lib/env_dpdk/pci_dpdk.o 00:01:56.388 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:56.388 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:56.647 LIB libspdk_rdma_provider.a 00:01:56.647 SO libspdk_rdma_provider.so.6.0 00:01:56.647 LIB libspdk_conf.a 00:01:56.647 SO libspdk_conf.so.6.0 00:01:56.647 LIB libspdk_rdma_utils.a 00:01:56.647 SYMLINK libspdk_rdma_provider.so 00:01:56.647 LIB libspdk_json.a 00:01:56.905 SO libspdk_rdma_utils.so.1.0 00:01:56.905 SYMLINK libspdk_conf.so 00:01:56.905 SO libspdk_json.so.6.0 00:01:56.905 SYMLINK libspdk_rdma_utils.so 00:01:56.905 SYMLINK libspdk_json.so 00:01:57.164 LIB libspdk_idxd.a 00:01:57.164 LIB libspdk_reduce.a 00:01:57.164 SO libspdk_idxd.so.12.0 00:01:57.164 LIB libspdk_vmd.a 00:01:57.164 SO libspdk_reduce.so.6.0 00:01:57.164 SO libspdk_vmd.so.6.0 00:01:57.164 SYMLINK libspdk_reduce.so 00:01:57.164 SYMLINK libspdk_idxd.so 00:01:57.164 SYMLINK libspdk_vmd.so 00:01:57.164 CC 
lib/jsonrpc/jsonrpc_server.o 00:01:57.164 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:57.164 CC lib/jsonrpc/jsonrpc_client.o 00:01:57.164 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:57.422 LIB libspdk_jsonrpc.a 00:01:57.681 SO libspdk_jsonrpc.so.6.0 00:01:57.681 SYMLINK libspdk_jsonrpc.so 00:01:57.681 LIB libspdk_env_dpdk.a 00:01:57.939 SO libspdk_env_dpdk.so.14.1 00:01:57.939 SYMLINK libspdk_env_dpdk.so 00:01:57.939 CC lib/rpc/rpc.o 00:01:58.198 LIB libspdk_rpc.a 00:01:58.198 SO libspdk_rpc.so.6.0 00:01:58.198 SYMLINK libspdk_rpc.so 00:01:58.766 CC lib/notify/notify.o 00:01:58.766 CC lib/notify/notify_rpc.o 00:01:58.766 CC lib/keyring/keyring.o 00:01:58.766 CC lib/keyring/keyring_rpc.o 00:01:58.766 CC lib/trace/trace.o 00:01:58.766 CC lib/trace/trace_flags.o 00:01:58.766 CC lib/trace/trace_rpc.o 00:01:58.766 LIB libspdk_notify.a 00:01:58.766 SO libspdk_notify.so.6.0 00:01:58.766 LIB libspdk_keyring.a 00:01:59.024 SYMLINK libspdk_notify.so 00:01:59.024 LIB libspdk_trace.a 00:01:59.024 SO libspdk_keyring.so.1.0 00:01:59.024 SO libspdk_trace.so.10.0 00:01:59.024 SYMLINK libspdk_keyring.so 00:01:59.024 SYMLINK libspdk_trace.so 00:01:59.284 CC lib/sock/sock.o 00:01:59.284 CC lib/sock/sock_rpc.o 00:01:59.284 CC lib/thread/thread.o 00:01:59.284 CC lib/thread/iobuf.o 00:01:59.852 LIB libspdk_sock.a 00:01:59.852 SO libspdk_sock.so.10.0 00:01:59.852 SYMLINK libspdk_sock.so 00:02:00.111 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:00.111 CC lib/nvme/nvme_ctrlr.o 00:02:00.111 CC lib/nvme/nvme_fabric.o 00:02:00.111 CC lib/nvme/nvme_ns_cmd.o 00:02:00.111 CC lib/nvme/nvme_ns.o 00:02:00.111 CC lib/nvme/nvme_pcie_common.o 00:02:00.111 CC lib/nvme/nvme_pcie.o 00:02:00.370 CC lib/nvme/nvme_qpair.o 00:02:00.370 CC lib/nvme/nvme.o 00:02:00.370 CC lib/nvme/nvme_quirks.o 00:02:00.370 CC lib/nvme/nvme_transport.o 00:02:00.370 CC lib/nvme/nvme_discovery.o 00:02:00.370 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:00.370 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:00.370 CC lib/nvme/nvme_tcp.o 00:02:00.370 CC 
lib/nvme/nvme_opal.o 00:02:00.370 CC lib/nvme/nvme_io_msg.o 00:02:00.370 CC lib/nvme/nvme_poll_group.o 00:02:00.370 CC lib/nvme/nvme_zns.o 00:02:00.370 CC lib/nvme/nvme_stubs.o 00:02:00.370 CC lib/nvme/nvme_auth.o 00:02:00.370 CC lib/nvme/nvme_cuse.o 00:02:00.370 CC lib/nvme/nvme_rdma.o 00:02:00.629 LIB libspdk_thread.a 00:02:00.887 SO libspdk_thread.so.10.1 00:02:00.887 SYMLINK libspdk_thread.so 00:02:01.145 CC lib/accel/accel.o 00:02:01.145 CC lib/blob/blobstore.o 00:02:01.145 CC lib/accel/accel_rpc.o 00:02:01.145 CC lib/blob/request.o 00:02:01.145 CC lib/accel/accel_sw.o 00:02:01.145 CC lib/blob/zeroes.o 00:02:01.145 CC lib/blob/blob_bs_dev.o 00:02:01.145 CC lib/virtio/virtio.o 00:02:01.145 CC lib/virtio/virtio_vfio_user.o 00:02:01.145 CC lib/virtio/virtio_vhost_user.o 00:02:01.145 CC lib/virtio/virtio_pci.o 00:02:01.145 CC lib/init/subsystem.o 00:02:01.145 CC lib/init/json_config.o 00:02:01.145 CC lib/init/subsystem_rpc.o 00:02:01.145 CC lib/init/rpc.o 00:02:01.403 LIB libspdk_init.a 00:02:01.403 SO libspdk_init.so.5.0 00:02:01.662 LIB libspdk_virtio.a 00:02:01.662 SO libspdk_virtio.so.7.0 00:02:01.662 SYMLINK libspdk_init.so 00:02:01.662 SYMLINK libspdk_virtio.so 00:02:01.920 CC lib/event/app.o 00:02:01.920 CC lib/event/reactor.o 00:02:01.920 CC lib/event/log_rpc.o 00:02:01.920 CC lib/event/app_rpc.o 00:02:01.920 CC lib/event/scheduler_static.o 00:02:02.179 LIB libspdk_accel.a 00:02:02.179 SO libspdk_accel.so.15.1 00:02:02.179 LIB libspdk_nvme.a 00:02:02.179 SYMLINK libspdk_accel.so 00:02:02.179 SO libspdk_nvme.so.13.1 00:02:02.438 LIB libspdk_event.a 00:02:02.438 SO libspdk_event.so.14.0 00:02:02.438 SYMLINK libspdk_event.so 00:02:02.697 SYMLINK libspdk_nvme.so 00:02:02.697 CC lib/bdev/bdev.o 00:02:02.697 CC lib/bdev/bdev_rpc.o 00:02:02.697 CC lib/bdev/scsi_nvme.o 00:02:02.697 CC lib/bdev/bdev_zone.o 00:02:02.697 CC lib/bdev/part.o 00:02:04.073 LIB libspdk_blob.a 00:02:04.073 SO libspdk_blob.so.11.0 00:02:04.073 SYMLINK libspdk_blob.so 00:02:04.676 CC 
lib/lvol/lvol.o 00:02:04.676 CC lib/blobfs/blobfs.o 00:02:04.676 CC lib/blobfs/tree.o 00:02:04.936 LIB libspdk_bdev.a 00:02:04.936 SO libspdk_bdev.so.15.1 00:02:04.936 SYMLINK libspdk_bdev.so 00:02:05.194 LIB libspdk_blobfs.a 00:02:05.194 LIB libspdk_lvol.a 00:02:05.194 SO libspdk_blobfs.so.10.0 00:02:05.453 CC lib/nbd/nbd_rpc.o 00:02:05.453 CC lib/nbd/nbd.o 00:02:05.453 SO libspdk_lvol.so.10.0 00:02:05.453 CC lib/ublk/ublk.o 00:02:05.453 CC lib/ublk/ublk_rpc.o 00:02:05.453 CC lib/nvmf/ctrlr.o 00:02:05.453 CC lib/nvmf/ctrlr_discovery.o 00:02:05.453 CC lib/nvmf/ctrlr_bdev.o 00:02:05.453 SYMLINK libspdk_blobfs.so 00:02:05.453 CC lib/nvmf/subsystem.o 00:02:05.453 CC lib/nvmf/nvmf.o 00:02:05.453 CC lib/nvmf/nvmf_rpc.o 00:02:05.453 CC lib/nvmf/transport.o 00:02:05.453 CC lib/nvmf/tcp.o 00:02:05.453 CC lib/nvmf/stubs.o 00:02:05.453 CC lib/scsi/dev.o 00:02:05.453 CC lib/nvmf/mdns_server.o 00:02:05.453 CC lib/scsi/lun.o 00:02:05.453 CC lib/ftl/ftl_core.o 00:02:05.453 CC lib/nvmf/rdma.o 00:02:05.453 CC lib/scsi/port.o 00:02:05.453 CC lib/ftl/ftl_init.o 00:02:05.453 CC lib/nvmf/auth.o 00:02:05.453 CC lib/scsi/scsi.o 00:02:05.453 CC lib/ftl/ftl_layout.o 00:02:05.453 CC lib/ftl/ftl_io.o 00:02:05.453 CC lib/scsi/scsi_bdev.o 00:02:05.453 CC lib/ftl/ftl_debug.o 00:02:05.453 CC lib/scsi/scsi_pr.o 00:02:05.453 CC lib/scsi/scsi_rpc.o 00:02:05.453 CC lib/ftl/ftl_sb.o 00:02:05.453 CC lib/scsi/task.o 00:02:05.453 CC lib/ftl/ftl_l2p.o 00:02:05.453 CC lib/ftl/ftl_nv_cache.o 00:02:05.453 CC lib/ftl/ftl_l2p_flat.o 00:02:05.453 CC lib/ftl/ftl_band.o 00:02:05.453 CC lib/ftl/ftl_writer.o 00:02:05.453 CC lib/ftl/ftl_band_ops.o 00:02:05.453 CC lib/ftl/ftl_rq.o 00:02:05.453 CC lib/ftl/ftl_reloc.o 00:02:05.453 CC lib/ftl/ftl_l2p_cache.o 00:02:05.453 CC lib/ftl/mngt/ftl_mngt.o 00:02:05.453 CC lib/ftl/ftl_p2l.o 00:02:05.453 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:05.453 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:05.453 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:05.453 CC lib/ftl/mngt/ftl_mngt_md.o 
00:02:05.453 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:05.453 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:05.453 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:05.453 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:05.453 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:05.453 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:05.453 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:05.453 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:05.453 SYMLINK libspdk_lvol.so 00:02:05.453 CC lib/ftl/utils/ftl_mempool.o 00:02:05.453 CC lib/ftl/utils/ftl_md.o 00:02:05.453 CC lib/ftl/utils/ftl_conf.o 00:02:05.453 CC lib/ftl/utils/ftl_property.o 00:02:05.453 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:05.453 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:05.453 CC lib/ftl/utils/ftl_bitmap.o 00:02:05.453 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:05.453 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:05.453 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:05.453 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:05.453 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:05.453 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:05.453 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:05.453 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:05.453 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:05.453 CC lib/ftl/base/ftl_base_dev.o 00:02:05.453 CC lib/ftl/base/ftl_base_bdev.o 00:02:05.453 CC lib/ftl/ftl_trace.o 00:02:05.712 LIB libspdk_nbd.a 00:02:05.970 SO libspdk_nbd.so.7.0 00:02:05.970 SYMLINK libspdk_nbd.so 00:02:06.229 LIB libspdk_scsi.a 00:02:06.229 LIB libspdk_ublk.a 00:02:06.229 SO libspdk_ublk.so.3.0 00:02:06.229 SO libspdk_scsi.so.9.0 00:02:06.229 SYMLINK libspdk_ublk.so 00:02:06.229 SYMLINK libspdk_scsi.so 00:02:06.488 LIB libspdk_ftl.a 00:02:06.747 CC lib/iscsi/conn.o 00:02:06.747 CC lib/iscsi/init_grp.o 00:02:06.747 CC lib/iscsi/iscsi.o 00:02:06.747 CC lib/iscsi/portal_grp.o 00:02:06.747 CC lib/iscsi/md5.o 00:02:06.747 CC lib/iscsi/param.o 00:02:06.747 CC lib/iscsi/task.o 00:02:06.747 CC lib/iscsi/tgt_node.o 00:02:06.747 CC lib/iscsi/iscsi_subsystem.o 00:02:06.747 CC lib/iscsi/iscsi_rpc.o 00:02:06.747 
CC lib/vhost/vhost.o 00:02:06.747 CC lib/vhost/vhost_rpc.o 00:02:06.747 CC lib/vhost/vhost_scsi.o 00:02:06.747 CC lib/vhost/vhost_blk.o 00:02:06.747 CC lib/vhost/rte_vhost_user.o 00:02:06.747 SO libspdk_ftl.so.9.0 00:02:07.007 SYMLINK libspdk_ftl.so 00:02:07.584 LIB libspdk_nvmf.a 00:02:07.584 LIB libspdk_vhost.a 00:02:07.584 SO libspdk_vhost.so.8.0 00:02:07.584 SO libspdk_nvmf.so.18.1 00:02:07.584 SYMLINK libspdk_vhost.so 00:02:07.843 SYMLINK libspdk_nvmf.so 00:02:07.843 LIB libspdk_iscsi.a 00:02:07.843 SO libspdk_iscsi.so.8.0 00:02:08.102 SYMLINK libspdk_iscsi.so 00:02:08.670 CC module/env_dpdk/env_dpdk_rpc.o 00:02:08.670 LIB libspdk_env_dpdk_rpc.a 00:02:08.929 CC module/sock/posix/posix.o 00:02:08.929 CC module/blob/bdev/blob_bdev.o 00:02:08.929 SO libspdk_env_dpdk_rpc.so.6.0 00:02:08.929 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:08.929 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev.o 00:02:08.929 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev_rpc.o 00:02:08.929 CC module/accel/iaa/accel_iaa.o 00:02:08.929 CC module/accel/iaa/accel_iaa_rpc.o 00:02:08.929 CC module/scheduler/gscheduler/gscheduler.o 00:02:08.929 CC module/accel/dsa/accel_dsa.o 00:02:08.929 CC module/accel/error/accel_error.o 00:02:08.929 CC module/accel/dsa/accel_dsa_rpc.o 00:02:08.929 CC module/keyring/file/keyring.o 00:02:08.929 CC module/accel/error/accel_error_rpc.o 00:02:08.929 CC module/keyring/file/keyring_rpc.o 00:02:08.929 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev.o 00:02:08.929 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev_rpc.o 00:02:08.929 CC module/keyring/linux/keyring.o 00:02:08.929 CC module/accel/ioat/accel_ioat.o 00:02:08.929 CC module/keyring/linux/keyring_rpc.o 00:02:08.929 CC module/accel/ioat/accel_ioat_rpc.o 00:02:08.929 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:08.929 SYMLINK libspdk_env_dpdk_rpc.so 00:02:08.929 LIB libspdk_scheduler_dpdk_governor.a 00:02:08.929 LIB libspdk_scheduler_gscheduler.a 00:02:08.929 LIB 
libspdk_keyring_linux.a 00:02:08.929 LIB libspdk_keyring_file.a 00:02:08.929 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:08.929 SO libspdk_scheduler_gscheduler.so.4.0 00:02:08.929 LIB libspdk_accel_ioat.a 00:02:09.188 LIB libspdk_accel_error.a 00:02:09.188 SO libspdk_keyring_linux.so.1.0 00:02:09.188 SO libspdk_keyring_file.so.1.0 00:02:09.188 LIB libspdk_accel_iaa.a 00:02:09.188 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:09.188 LIB libspdk_scheduler_dynamic.a 00:02:09.188 SO libspdk_accel_ioat.so.6.0 00:02:09.188 LIB libspdk_accel_dsa.a 00:02:09.188 SO libspdk_accel_error.so.2.0 00:02:09.188 LIB libspdk_blob_bdev.a 00:02:09.188 SYMLINK libspdk_scheduler_gscheduler.so 00:02:09.188 SO libspdk_accel_iaa.so.3.0 00:02:09.188 SO libspdk_scheduler_dynamic.so.4.0 00:02:09.188 SO libspdk_accel_dsa.so.5.0 00:02:09.188 SYMLINK libspdk_keyring_linux.so 00:02:09.188 SYMLINK libspdk_keyring_file.so 00:02:09.188 SO libspdk_blob_bdev.so.11.0 00:02:09.189 SYMLINK libspdk_accel_ioat.so 00:02:09.189 SYMLINK libspdk_accel_iaa.so 00:02:09.189 SYMLINK libspdk_accel_error.so 00:02:09.189 SYMLINK libspdk_scheduler_dynamic.so 00:02:09.189 SYMLINK libspdk_accel_dsa.so 00:02:09.189 SYMLINK libspdk_blob_bdev.so 00:02:09.447 LIB libspdk_sock_posix.a 00:02:09.447 SO libspdk_sock_posix.so.6.0 00:02:09.706 SYMLINK libspdk_sock_posix.so 00:02:09.706 CC module/bdev/delay/vbdev_delay.o 00:02:09.706 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:09.706 CC module/bdev/lvol/vbdev_lvol.o 00:02:09.706 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:09.706 CC module/bdev/malloc/bdev_malloc.o 00:02:09.706 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:09.706 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:09.706 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:09.706 CC module/bdev/error/vbdev_error.o 00:02:09.706 CC module/bdev/error/vbdev_error_rpc.o 00:02:09.706 CC module/bdev/gpt/gpt.o 00:02:09.706 CC module/bdev/gpt/vbdev_gpt.o 00:02:09.706 CC module/bdev/null/bdev_null_rpc.o 00:02:09.706 
CC module/bdev/null/bdev_null.o 00:02:09.706 CC module/bdev/nvme/bdev_nvme.o 00:02:09.706 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:09.706 CC module/bdev/nvme/nvme_rpc.o 00:02:09.706 CC module/bdev/aio/bdev_aio.o 00:02:09.706 CC module/bdev/nvme/vbdev_opal.o 00:02:09.706 CC module/bdev/compress/vbdev_compress.o 00:02:09.706 CC module/bdev/ftl/bdev_ftl.o 00:02:09.706 CC module/bdev/aio/bdev_aio_rpc.o 00:02:09.706 CC module/bdev/nvme/bdev_mdns_client.o 00:02:09.706 CC module/bdev/compress/vbdev_compress_rpc.o 00:02:09.706 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:09.706 CC module/blobfs/bdev/blobfs_bdev.o 00:02:09.706 CC module/bdev/passthru/vbdev_passthru.o 00:02:09.706 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:09.706 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:09.706 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:09.706 CC module/bdev/split/vbdev_split.o 00:02:09.706 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:09.706 CC module/bdev/raid/bdev_raid.o 00:02:09.706 CC module/bdev/raid/bdev_raid_rpc.o 00:02:09.706 CC module/bdev/raid/bdev_raid_sb.o 00:02:09.706 CC module/bdev/split/vbdev_split_rpc.o 00:02:09.706 CC module/bdev/crypto/vbdev_crypto.o 00:02:09.706 CC module/bdev/crypto/vbdev_crypto_rpc.o 00:02:09.706 CC module/bdev/raid/raid0.o 00:02:09.706 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:09.706 CC module/bdev/raid/raid1.o 00:02:09.706 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:09.706 CC module/bdev/raid/concat.o 00:02:09.706 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:09.706 CC module/bdev/iscsi/bdev_iscsi.o 00:02:09.706 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:09.706 LIB libspdk_accel_dpdk_compressdev.a 00:02:09.965 SO libspdk_accel_dpdk_compressdev.so.3.0 00:02:09.965 SYMLINK libspdk_accel_dpdk_compressdev.so 00:02:09.965 LIB libspdk_blobfs_bdev.a 00:02:09.965 SO libspdk_blobfs_bdev.so.6.0 00:02:09.965 LIB libspdk_bdev_split.a 00:02:09.965 LIB libspdk_bdev_null.a 00:02:09.965 LIB libspdk_bdev_error.a 00:02:09.965 LIB libspdk_bdev_gpt.a 
00:02:09.965 SO libspdk_bdev_split.so.6.0 00:02:10.223 SYMLINK libspdk_blobfs_bdev.so 00:02:10.223 LIB libspdk_bdev_passthru.a 00:02:10.223 SO libspdk_bdev_error.so.6.0 00:02:10.223 SO libspdk_bdev_null.so.6.0 00:02:10.223 LIB libspdk_bdev_ftl.a 00:02:10.223 SO libspdk_bdev_gpt.so.6.0 00:02:10.223 LIB libspdk_bdev_zone_block.a 00:02:10.223 LIB libspdk_bdev_aio.a 00:02:10.223 SO libspdk_bdev_passthru.so.6.0 00:02:10.223 LIB libspdk_bdev_delay.a 00:02:10.223 LIB libspdk_bdev_malloc.a 00:02:10.223 LIB libspdk_accel_dpdk_cryptodev.a 00:02:10.223 LIB libspdk_bdev_crypto.a 00:02:10.223 SO libspdk_bdev_ftl.so.6.0 00:02:10.223 LIB libspdk_bdev_compress.a 00:02:10.223 SYMLINK libspdk_bdev_split.so 00:02:10.223 SO libspdk_bdev_zone_block.so.6.0 00:02:10.223 SYMLINK libspdk_bdev_error.so 00:02:10.223 SO libspdk_bdev_malloc.so.6.0 00:02:10.223 SYMLINK libspdk_bdev_null.so 00:02:10.223 SO libspdk_accel_dpdk_cryptodev.so.3.0 00:02:10.223 SO libspdk_bdev_delay.so.6.0 00:02:10.223 SO libspdk_bdev_aio.so.6.0 00:02:10.223 LIB libspdk_bdev_iscsi.a 00:02:10.223 SYMLINK libspdk_bdev_gpt.so 00:02:10.223 SO libspdk_bdev_crypto.so.6.0 00:02:10.223 SO libspdk_bdev_compress.so.6.0 00:02:10.223 SYMLINK libspdk_bdev_passthru.so 00:02:10.223 SO libspdk_bdev_iscsi.so.6.0 00:02:10.223 SYMLINK libspdk_bdev_ftl.so 00:02:10.223 SYMLINK libspdk_bdev_zone_block.so 00:02:10.223 SYMLINK libspdk_bdev_malloc.so 00:02:10.223 SYMLINK libspdk_accel_dpdk_cryptodev.so 00:02:10.224 SYMLINK libspdk_bdev_aio.so 00:02:10.224 SYMLINK libspdk_bdev_delay.so 00:02:10.224 SYMLINK libspdk_bdev_compress.so 00:02:10.224 SYMLINK libspdk_bdev_crypto.so 00:02:10.224 LIB libspdk_bdev_lvol.a 00:02:10.224 SYMLINK libspdk_bdev_iscsi.so 00:02:10.224 SO libspdk_bdev_lvol.so.6.0 00:02:10.224 LIB libspdk_bdev_virtio.a 00:02:10.483 SYMLINK libspdk_bdev_lvol.so 00:02:10.483 SO libspdk_bdev_virtio.so.6.0 00:02:10.483 SYMLINK libspdk_bdev_virtio.so 00:02:10.742 LIB libspdk_bdev_raid.a 00:02:10.742 SO libspdk_bdev_raid.so.6.0 
00:02:11.001 SYMLINK libspdk_bdev_raid.so 00:02:11.938 LIB libspdk_bdev_nvme.a 00:02:11.938 SO libspdk_bdev_nvme.so.7.0 00:02:11.938 SYMLINK libspdk_bdev_nvme.so 00:02:12.875 CC module/event/subsystems/scheduler/scheduler.o 00:02:12.875 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:12.875 CC module/event/subsystems/iobuf/iobuf.o 00:02:12.875 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:12.875 CC module/event/subsystems/vmd/vmd.o 00:02:12.875 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:12.875 CC module/event/subsystems/keyring/keyring.o 00:02:12.875 CC module/event/subsystems/sock/sock.o 00:02:12.875 LIB libspdk_event_scheduler.a 00:02:12.875 LIB libspdk_event_vhost_blk.a 00:02:12.875 LIB libspdk_event_keyring.a 00:02:12.875 SO libspdk_event_scheduler.so.4.0 00:02:12.875 LIB libspdk_event_vmd.a 00:02:12.875 LIB libspdk_event_iobuf.a 00:02:12.875 LIB libspdk_event_sock.a 00:02:12.875 SO libspdk_event_keyring.so.1.0 00:02:12.875 SO libspdk_event_vhost_blk.so.3.0 00:02:12.875 SO libspdk_event_vmd.so.6.0 00:02:12.875 SO libspdk_event_iobuf.so.3.0 00:02:12.875 SYMLINK libspdk_event_scheduler.so 00:02:12.875 SO libspdk_event_sock.so.5.0 00:02:12.875 SYMLINK libspdk_event_keyring.so 00:02:12.875 SYMLINK libspdk_event_vhost_blk.so 00:02:12.875 SYMLINK libspdk_event_sock.so 00:02:12.875 SYMLINK libspdk_event_vmd.so 00:02:12.875 SYMLINK libspdk_event_iobuf.so 00:02:13.442 CC module/event/subsystems/accel/accel.o 00:02:13.442 LIB libspdk_event_accel.a 00:02:13.442 SO libspdk_event_accel.so.6.0 00:02:13.701 SYMLINK libspdk_event_accel.so 00:02:13.959 CC module/event/subsystems/bdev/bdev.o 00:02:14.217 LIB libspdk_event_bdev.a 00:02:14.217 SO libspdk_event_bdev.so.6.0 00:02:14.217 SYMLINK libspdk_event_bdev.so 00:02:14.475 CC module/event/subsystems/nbd/nbd.o 00:02:14.475 CC module/event/subsystems/scsi/scsi.o 00:02:14.475 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:14.475 CC module/event/subsystems/ublk/ublk.o 00:02:14.475 CC 
module/event/subsystems/nvmf/nvmf_tgt.o 00:02:14.733 LIB libspdk_event_nbd.a 00:02:14.733 SO libspdk_event_nbd.so.6.0 00:02:14.733 LIB libspdk_event_ublk.a 00:02:14.733 LIB libspdk_event_scsi.a 00:02:14.733 SO libspdk_event_ublk.so.3.0 00:02:14.733 SO libspdk_event_scsi.so.6.0 00:02:14.733 LIB libspdk_event_nvmf.a 00:02:14.733 SYMLINK libspdk_event_nbd.so 00:02:14.733 SO libspdk_event_nvmf.so.6.0 00:02:14.733 SYMLINK libspdk_event_ublk.so 00:02:14.733 SYMLINK libspdk_event_scsi.so 00:02:14.991 SYMLINK libspdk_event_nvmf.so 00:02:15.250 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:15.250 CC module/event/subsystems/iscsi/iscsi.o 00:02:15.250 LIB libspdk_event_vhost_scsi.a 00:02:15.250 LIB libspdk_event_iscsi.a 00:02:15.508 SO libspdk_event_vhost_scsi.so.3.0 00:02:15.508 SO libspdk_event_iscsi.so.6.0 00:02:15.508 SYMLINK libspdk_event_vhost_scsi.so 00:02:15.508 SYMLINK libspdk_event_iscsi.so 00:02:15.767 SO libspdk.so.6.0 00:02:15.768 SYMLINK libspdk.so 00:02:16.036 CC test/rpc_client/rpc_client_test.o 00:02:16.036 CXX app/trace/trace.o 00:02:16.036 CC app/spdk_nvme_identify/identify.o 00:02:16.036 CC app/trace_record/trace_record.o 00:02:16.036 TEST_HEADER include/spdk/accel.h 00:02:16.036 TEST_HEADER include/spdk/assert.h 00:02:16.036 TEST_HEADER include/spdk/accel_module.h 00:02:16.036 TEST_HEADER include/spdk/barrier.h 00:02:16.036 CC app/spdk_nvme_perf/perf.o 00:02:16.036 TEST_HEADER include/spdk/base64.h 00:02:16.036 TEST_HEADER include/spdk/bdev.h 00:02:16.036 TEST_HEADER include/spdk/bdev_module.h 00:02:16.036 TEST_HEADER include/spdk/bdev_zone.h 00:02:16.036 TEST_HEADER include/spdk/bit_array.h 00:02:16.036 CC app/spdk_lspci/spdk_lspci.o 00:02:16.036 TEST_HEADER include/spdk/bit_pool.h 00:02:16.036 TEST_HEADER include/spdk/blob_bdev.h 00:02:16.036 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:16.036 TEST_HEADER include/spdk/blobfs.h 00:02:16.036 TEST_HEADER include/spdk/conf.h 00:02:16.036 CC app/spdk_nvme_discover/discovery_aer.o 00:02:16.036 
TEST_HEADER include/spdk/blob.h 00:02:16.036 TEST_HEADER include/spdk/config.h 00:02:16.036 TEST_HEADER include/spdk/cpuset.h 00:02:16.036 TEST_HEADER include/spdk/crc16.h 00:02:16.036 TEST_HEADER include/spdk/crc32.h 00:02:16.036 TEST_HEADER include/spdk/dif.h 00:02:16.036 TEST_HEADER include/spdk/dma.h 00:02:16.036 TEST_HEADER include/spdk/crc64.h 00:02:16.036 TEST_HEADER include/spdk/endian.h 00:02:16.036 TEST_HEADER include/spdk/env_dpdk.h 00:02:16.036 CC app/spdk_top/spdk_top.o 00:02:16.036 TEST_HEADER include/spdk/env.h 00:02:16.036 TEST_HEADER include/spdk/event.h 00:02:16.036 TEST_HEADER include/spdk/fd_group.h 00:02:16.036 TEST_HEADER include/spdk/fd.h 00:02:16.036 TEST_HEADER include/spdk/file.h 00:02:16.036 TEST_HEADER include/spdk/gpt_spec.h 00:02:16.036 TEST_HEADER include/spdk/ftl.h 00:02:16.036 TEST_HEADER include/spdk/histogram_data.h 00:02:16.036 TEST_HEADER include/spdk/hexlify.h 00:02:16.036 TEST_HEADER include/spdk/idxd.h 00:02:16.036 TEST_HEADER include/spdk/idxd_spec.h 00:02:16.036 TEST_HEADER include/spdk/init.h 00:02:16.036 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:16.036 TEST_HEADER include/spdk/ioat.h 00:02:16.036 TEST_HEADER include/spdk/ioat_spec.h 00:02:16.036 TEST_HEADER include/spdk/iscsi_spec.h 00:02:16.036 TEST_HEADER include/spdk/json.h 00:02:16.036 TEST_HEADER include/spdk/jsonrpc.h 00:02:16.036 TEST_HEADER include/spdk/keyring.h 00:02:16.036 CC app/nvmf_tgt/nvmf_main.o 00:02:16.036 TEST_HEADER include/spdk/keyring_module.h 00:02:16.036 TEST_HEADER include/spdk/likely.h 00:02:16.036 TEST_HEADER include/spdk/log.h 00:02:16.036 TEST_HEADER include/spdk/memory.h 00:02:16.036 TEST_HEADER include/spdk/lvol.h 00:02:16.036 CC app/spdk_dd/spdk_dd.o 00:02:16.036 TEST_HEADER include/spdk/mmio.h 00:02:16.036 TEST_HEADER include/spdk/nbd.h 00:02:16.036 TEST_HEADER include/spdk/nvme_intel.h 00:02:16.036 TEST_HEADER include/spdk/notify.h 00:02:16.036 TEST_HEADER include/spdk/nvme.h 00:02:16.036 TEST_HEADER include/spdk/nvme_ocssd.h 
00:02:16.036 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:16.036 TEST_HEADER include/spdk/nvme_zns.h 00:02:16.036 TEST_HEADER include/spdk/nvme_spec.h 00:02:16.036 TEST_HEADER include/spdk/nvmf.h 00:02:16.036 TEST_HEADER include/spdk/nvmf_spec.h 00:02:16.036 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:16.036 TEST_HEADER include/spdk/nvmf_transport.h 00:02:16.036 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:16.036 TEST_HEADER include/spdk/opal.h 00:02:16.036 TEST_HEADER include/spdk/opal_spec.h 00:02:16.036 TEST_HEADER include/spdk/pipe.h 00:02:16.036 TEST_HEADER include/spdk/pci_ids.h 00:02:16.036 TEST_HEADER include/spdk/scheduler.h 00:02:16.036 TEST_HEADER include/spdk/rpc.h 00:02:16.036 TEST_HEADER include/spdk/queue.h 00:02:16.036 TEST_HEADER include/spdk/reduce.h 00:02:16.036 TEST_HEADER include/spdk/scsi.h 00:02:16.036 TEST_HEADER include/spdk/sock.h 00:02:16.036 TEST_HEADER include/spdk/string.h 00:02:16.036 TEST_HEADER include/spdk/trace.h 00:02:16.036 TEST_HEADER include/spdk/thread.h 00:02:16.036 TEST_HEADER include/spdk/stdinc.h 00:02:16.036 TEST_HEADER include/spdk/trace_parser.h 00:02:16.036 TEST_HEADER include/spdk/tree.h 00:02:16.036 TEST_HEADER include/spdk/scsi_spec.h 00:02:16.036 TEST_HEADER include/spdk/util.h 00:02:16.036 TEST_HEADER include/spdk/ublk.h 00:02:16.036 TEST_HEADER include/spdk/uuid.h 00:02:16.036 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:16.036 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:16.036 TEST_HEADER include/spdk/version.h 00:02:16.036 TEST_HEADER include/spdk/vmd.h 00:02:16.036 TEST_HEADER include/spdk/xor.h 00:02:16.036 TEST_HEADER include/spdk/vhost.h 00:02:16.036 TEST_HEADER include/spdk/zipf.h 00:02:16.036 CXX test/cpp_headers/accel.o 00:02:16.036 CXX test/cpp_headers/accel_module.o 00:02:16.036 CXX test/cpp_headers/assert.o 00:02:16.036 CC app/spdk_tgt/spdk_tgt.o 00:02:16.036 CXX test/cpp_headers/base64.o 00:02:16.036 CXX test/cpp_headers/barrier.o 00:02:16.036 CXX test/cpp_headers/bdev_zone.o 00:02:16.036 
CXX test/cpp_headers/bdev_module.o 00:02:16.036 CXX test/cpp_headers/bdev.o 00:02:16.036 CXX test/cpp_headers/bit_array.o 00:02:16.036 CXX test/cpp_headers/bit_pool.o 00:02:16.036 CXX test/cpp_headers/blob.o 00:02:16.036 CXX test/cpp_headers/blobfs_bdev.o 00:02:16.036 CXX test/cpp_headers/blob_bdev.o 00:02:16.036 CXX test/cpp_headers/blobfs.o 00:02:16.036 CXX test/cpp_headers/conf.o 00:02:16.036 CXX test/cpp_headers/cpuset.o 00:02:16.036 CXX test/cpp_headers/config.o 00:02:16.036 CXX test/cpp_headers/crc16.o 00:02:16.036 CXX test/cpp_headers/crc64.o 00:02:16.036 CXX test/cpp_headers/crc32.o 00:02:16.036 CXX test/cpp_headers/endian.o 00:02:16.036 CXX test/cpp_headers/dma.o 00:02:16.036 CXX test/cpp_headers/dif.o 00:02:16.036 CXX test/cpp_headers/env_dpdk.o 00:02:16.036 CXX test/cpp_headers/env.o 00:02:16.036 CXX test/cpp_headers/fd_group.o 00:02:16.036 CXX test/cpp_headers/event.o 00:02:16.036 CXX test/cpp_headers/fd.o 00:02:16.036 CXX test/cpp_headers/file.o 00:02:16.036 CXX test/cpp_headers/gpt_spec.o 00:02:16.036 CXX test/cpp_headers/ftl.o 00:02:16.036 CXX test/cpp_headers/hexlify.o 00:02:16.036 CXX test/cpp_headers/histogram_data.o 00:02:16.036 CXX test/cpp_headers/idxd.o 00:02:16.036 CXX test/cpp_headers/idxd_spec.o 00:02:16.036 CXX test/cpp_headers/ioat.o 00:02:16.036 CXX test/cpp_headers/init.o 00:02:16.036 CC app/iscsi_tgt/iscsi_tgt.o 00:02:16.036 CXX test/cpp_headers/ioat_spec.o 00:02:16.036 CXX test/cpp_headers/json.o 00:02:16.036 CXX test/cpp_headers/keyring.o 00:02:16.036 CXX test/cpp_headers/iscsi_spec.o 00:02:16.036 CXX test/cpp_headers/keyring_module.o 00:02:16.036 CXX test/cpp_headers/jsonrpc.o 00:02:16.036 CXX test/cpp_headers/likely.o 00:02:16.036 CXX test/cpp_headers/log.o 00:02:16.036 CXX test/cpp_headers/memory.o 00:02:16.036 CXX test/cpp_headers/lvol.o 00:02:16.036 CXX test/cpp_headers/mmio.o 00:02:16.036 CXX test/cpp_headers/nbd.o 00:02:16.036 CXX test/cpp_headers/notify.o 00:02:16.036 CXX test/cpp_headers/nvme.o 00:02:16.036 CXX 
test/cpp_headers/nvme_ocssd.o 00:02:16.036 CXX test/cpp_headers/nvme_intel.o 00:02:16.036 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:16.036 CXX test/cpp_headers/nvme_spec.o 00:02:16.036 CXX test/cpp_headers/nvme_zns.o 00:02:16.036 CXX test/cpp_headers/nvmf_cmd.o 00:02:16.036 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:16.309 CXX test/cpp_headers/nvmf.o 00:02:16.309 CXX test/cpp_headers/nvmf_spec.o 00:02:16.309 CXX test/cpp_headers/nvmf_transport.o 00:02:16.309 CXX test/cpp_headers/opal.o 00:02:16.309 CXX test/cpp_headers/opal_spec.o 00:02:16.309 CXX test/cpp_headers/pci_ids.o 00:02:16.309 CXX test/cpp_headers/pipe.o 00:02:16.309 CXX test/cpp_headers/queue.o 00:02:16.309 CXX test/cpp_headers/reduce.o 00:02:16.309 CXX test/cpp_headers/rpc.o 00:02:16.309 CXX test/cpp_headers/scheduler.o 00:02:16.309 CXX test/cpp_headers/scsi.o 00:02:16.309 CXX test/cpp_headers/scsi_spec.o 00:02:16.309 CC test/thread/poller_perf/poller_perf.o 00:02:16.309 CXX test/cpp_headers/sock.o 00:02:16.309 CXX test/cpp_headers/stdinc.o 00:02:16.309 CXX test/cpp_headers/string.o 00:02:16.309 CXX test/cpp_headers/thread.o 00:02:16.309 CXX test/cpp_headers/trace.o 00:02:16.309 CXX test/cpp_headers/trace_parser.o 00:02:16.309 CXX test/cpp_headers/tree.o 00:02:16.309 CXX test/cpp_headers/ublk.o 00:02:16.309 CXX test/cpp_headers/util.o 00:02:16.309 CXX test/cpp_headers/uuid.o 00:02:16.309 CXX test/cpp_headers/version.o 00:02:16.309 CXX test/cpp_headers/vfio_user_pci.o 00:02:16.309 CC test/app/jsoncat/jsoncat.o 00:02:16.309 CC test/app/histogram_perf/histogram_perf.o 00:02:16.309 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:16.309 CC test/env/memory/memory_ut.o 00:02:16.309 CC examples/util/zipf/zipf.o 00:02:16.309 CC test/env/pci/pci_ut.o 00:02:16.309 CC test/app/stub/stub.o 00:02:16.309 CC test/env/vtophys/vtophys.o 00:02:16.309 CC examples/ioat/verify/verify.o 00:02:16.309 CC examples/ioat/perf/perf.o 00:02:16.309 CXX test/cpp_headers/vfio_user_spec.o 00:02:16.309 CC 
app/fio/nvme/fio_plugin.o 00:02:16.309 CXX test/cpp_headers/vhost.o 00:02:16.309 CC test/dma/test_dma/test_dma.o 00:02:16.309 CC test/app/bdev_svc/bdev_svc.o 00:02:16.309 LINK spdk_lspci 00:02:16.591 CC app/fio/bdev/fio_plugin.o 00:02:16.591 LINK rpc_client_test 00:02:16.854 LINK interrupt_tgt 00:02:16.854 LINK nvmf_tgt 00:02:16.854 CC test/env/mem_callbacks/mem_callbacks.o 00:02:16.854 LINK spdk_nvme_discover 00:02:16.854 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:16.854 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:16.854 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:16.854 LINK jsoncat 00:02:16.854 LINK poller_perf 00:02:16.854 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:16.854 CXX test/cpp_headers/vmd.o 00:02:16.854 CXX test/cpp_headers/xor.o 00:02:16.854 LINK histogram_perf 00:02:16.854 CXX test/cpp_headers/zipf.o 00:02:16.854 LINK vtophys 00:02:16.854 LINK zipf 00:02:16.854 LINK spdk_trace_record 00:02:16.854 LINK spdk_tgt 00:02:16.854 LINK iscsi_tgt 00:02:16.854 LINK env_dpdk_post_init 00:02:17.112 LINK stub 00:02:17.112 LINK bdev_svc 00:02:17.112 LINK ioat_perf 00:02:17.112 LINK verify 00:02:17.112 LINK spdk_dd 00:02:17.112 LINK spdk_trace 00:02:17.112 LINK test_dma 00:02:17.371 LINK pci_ut 00:02:17.371 LINK nvme_fuzz 00:02:17.371 LINK spdk_bdev 00:02:17.371 LINK spdk_nvme 00:02:17.371 CC test/event/reactor/reactor.o 00:02:17.371 LINK vhost_fuzz 00:02:17.371 CC test/event/event_perf/event_perf.o 00:02:17.371 CC test/event/reactor_perf/reactor_perf.o 00:02:17.371 LINK mem_callbacks 00:02:17.371 CC test/event/app_repeat/app_repeat.o 00:02:17.371 CC examples/idxd/perf/perf.o 00:02:17.371 CC test/event/scheduler/scheduler.o 00:02:17.371 CC examples/sock/hello_world/hello_sock.o 00:02:17.371 CC examples/vmd/led/led.o 00:02:17.371 CC examples/vmd/lsvmd/lsvmd.o 00:02:17.371 CC examples/thread/thread/thread_ex.o 00:02:17.630 LINK spdk_nvme_identify 00:02:17.630 LINK spdk_nvme_perf 00:02:17.630 LINK reactor 00:02:17.630 CC app/vhost/vhost.o 00:02:17.630 
LINK event_perf 00:02:17.630 LINK reactor_perf 00:02:17.630 LINK spdk_top 00:02:17.630 LINK app_repeat 00:02:17.630 LINK lsvmd 00:02:17.630 LINK led 00:02:17.630 LINK scheduler 00:02:17.630 LINK hello_sock 00:02:17.887 LINK thread 00:02:17.887 CC test/nvme/connect_stress/connect_stress.o 00:02:17.887 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:17.887 CC test/nvme/e2edp/nvme_dp.o 00:02:17.887 CC test/nvme/sgl/sgl.o 00:02:17.887 CC test/nvme/reserve/reserve.o 00:02:17.887 CC test/nvme/reset/reset.o 00:02:17.887 CC test/nvme/aer/aer.o 00:02:17.887 LINK idxd_perf 00:02:17.887 CC test/nvme/fdp/fdp.o 00:02:17.887 CC test/nvme/startup/startup.o 00:02:17.887 CC test/blobfs/mkfs/mkfs.o 00:02:17.887 CC test/nvme/err_injection/err_injection.o 00:02:17.887 CC test/nvme/fused_ordering/fused_ordering.o 00:02:17.887 CC test/nvme/boot_partition/boot_partition.o 00:02:17.887 LINK vhost 00:02:17.887 CC test/nvme/overhead/overhead.o 00:02:17.887 CC test/nvme/simple_copy/simple_copy.o 00:02:17.887 CC test/nvme/cuse/cuse.o 00:02:17.887 CC test/nvme/compliance/nvme_compliance.o 00:02:17.887 CC test/accel/dif/dif.o 00:02:17.887 LINK memory_ut 00:02:17.887 CC test/lvol/esnap/esnap.o 00:02:17.887 LINK boot_partition 00:02:17.887 LINK connect_stress 00:02:17.887 LINK doorbell_aers 00:02:17.887 LINK err_injection 00:02:17.887 LINK reserve 00:02:17.887 LINK startup 00:02:17.887 LINK mkfs 00:02:17.888 LINK fused_ordering 00:02:18.146 LINK nvme_dp 00:02:18.146 LINK sgl 00:02:18.146 LINK simple_copy 00:02:18.146 LINK reset 00:02:18.146 LINK overhead 00:02:18.146 LINK aer 00:02:18.146 LINK fdp 00:02:18.146 LINK nvme_compliance 00:02:18.146 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:18.146 CC examples/nvme/hotplug/hotplug.o 00:02:18.146 CC examples/nvme/reconnect/reconnect.o 00:02:18.146 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:18.146 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:18.146 CC examples/nvme/abort/abort.o 00:02:18.146 CC examples/nvme/hello_world/hello_world.o 
00:02:18.146 CC examples/nvme/arbitration/arbitration.o 00:02:18.405 LINK dif 00:02:18.405 CC examples/accel/perf/accel_perf.o 00:02:18.405 CC examples/blob/hello_world/hello_blob.o 00:02:18.405 CC examples/blob/cli/blobcli.o 00:02:18.405 LINK pmr_persistence 00:02:18.405 LINK hello_world 00:02:18.405 LINK cmb_copy 00:02:18.405 LINK hotplug 00:02:18.664 LINK reconnect 00:02:18.664 LINK arbitration 00:02:18.664 LINK iscsi_fuzz 00:02:18.664 LINK abort 00:02:18.664 LINK hello_blob 00:02:18.664 LINK nvme_manage 00:02:18.664 LINK accel_perf 00:02:18.924 LINK blobcli 00:02:18.924 CC test/bdev/bdevio/bdevio.o 00:02:18.924 LINK cuse 00:02:19.183 LINK bdevio 00:02:19.441 CC examples/bdev/hello_world/hello_bdev.o 00:02:19.441 CC examples/bdev/bdevperf/bdevperf.o 00:02:19.700 LINK hello_bdev 00:02:19.958 LINK bdevperf 00:02:20.526 CC examples/nvmf/nvmf/nvmf.o 00:02:21.095 LINK nvmf 00:02:22.475 LINK esnap 00:02:22.734 00:02:22.734 real 1m16.287s 00:02:22.734 user 13m34.326s 00:02:22.734 sys 4m54.475s 00:02:22.734 21:45:42 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:22.993 21:45:42 make -- common/autotest_common.sh@10 -- $ set +x 00:02:22.993 ************************************ 00:02:22.993 END TEST make 00:02:22.993 ************************************ 00:02:22.993 21:45:42 -- common/autotest_common.sh@1142 -- $ return 0 00:02:22.993 21:45:42 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:22.993 21:45:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:22.993 21:45:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:22.993 21:45:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.993 21:45:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:22.993 21:45:42 -- pm/common@44 -- $ pid=1144433 00:02:22.993 21:45:42 -- pm/common@50 -- $ kill -TERM 1144433 00:02:22.993 21:45:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
00:02:22.993 21:45:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:02:22.993 21:45:42 -- pm/common@44 -- $ pid=1144435
00:02:22.993 21:45:42 -- pm/common@50 -- $ kill -TERM 1144435
00:02:22.993 21:45:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:22.993 21:45:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:02:22.993 21:45:42 -- pm/common@44 -- $ pid=1144437
00:02:22.993 21:45:42 -- pm/common@50 -- $ kill -TERM 1144437
00:02:22.993 21:45:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:22.993 21:45:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:02:22.993 21:45:42 -- pm/common@44 -- $ pid=1144463
00:02:22.993 21:45:42 -- pm/common@50 -- $ sudo -E kill -TERM 1144463
00:02:22.993 21:45:42 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh
00:02:22.994 21:45:42 -- nvmf/common.sh@7 -- # uname -s
00:02:22.994 21:45:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:02:22.994 21:45:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:02:22.994 21:45:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:02:22.994 21:45:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:02:22.994 21:45:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:02:22.994 21:45:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:02:22.994 21:45:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:02:22.994 21:45:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:02:22.994 21:45:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:02:22.994 21:45:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:02:22.994 21:45:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e
00:02:22.994 21:45:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e
00:02:22.994 21:45:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:02:22.994 21:45:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:02:22.994 21:45:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:02:22.994 21:45:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:02:22.994 21:45:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh
00:02:22.994 21:45:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:02:22.994 21:45:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:22.994 21:45:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:22.994 21:45:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:22.994 21:45:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:22.994 21:45:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:22.994 21:45:42 -- paths/export.sh@5 -- # export PATH
00:02:22.994 21:45:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:22.994 21:45:42 -- nvmf/common.sh@47 -- # : 0
00:02:22.994 21:45:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:02:22.994 21:45:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:02:22.994 21:45:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:02:22.994 21:45:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:02:22.994 21:45:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:02:22.994 21:45:42 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:02:22.994 21:45:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:02:22.994 21:45:42 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:02:22.994 21:45:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:02:22.994 21:45:42 -- spdk/autotest.sh@32 -- # uname -s
00:02:22.994 21:45:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:02:22.994 21:45:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:02:22.994 21:45:42 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps
00:02:22.994 21:45:42 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:02:22.994 21:45:42 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps
00:02:22.994 21:45:42 -- spdk/autotest.sh@44 -- # modprobe nbd
00:02:22.994 21:45:42 -- spdk/autotest.sh@46 -- # type -P udevadm
00:02:22.994 21:45:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:02:22.994 21:45:42 -- spdk/autotest.sh@48 -- # udevadm_pid=1213831
00:02:22.994 21:45:42 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:02:22.994 21:45:42 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:02:22.994 21:45:42 -- pm/common@17 -- # local monitor
00:02:22.994 21:45:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:22.994 21:45:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:22.994 21:45:42 -- pm/common@21 -- # date +%s
00:02:22.994 21:45:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:22.994 21:45:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:22.994 21:45:42 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720899942
00:02:22.994 21:45:42 -- pm/common@21 -- # date +%s
00:02:22.994 21:45:42 -- pm/common@25 -- # sleep 1
00:02:22.994 21:45:42 -- pm/common@21 -- # date +%s
00:02:22.994 21:45:42 -- pm/common@21 -- # date +%s
00:02:22.994 21:45:42 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720899942
00:02:22.994 21:45:42 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720899942
00:02:22.994 21:45:42 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720899942
00:02:23.254 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720899942_collect-cpu-load.pm.log
00:02:23.254 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720899942_collect-vmstat.pm.log
00:02:23.254 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720899942_collect-cpu-temp.pm.log
00:02:23.254 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720899942_collect-bmc-pm.bmc.pm.log
00:02:24.190 21:45:43 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:02:24.190 21:45:43 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:02:24.190 21:45:43 -- common/autotest_common.sh@722 -- # xtrace_disable
00:02:24.190 21:45:43 -- common/autotest_common.sh@10 -- # set +x
00:02:24.190 21:45:43 -- spdk/autotest.sh@59 -- # create_test_list
00:02:24.190 21:45:43 -- common/autotest_common.sh@746 -- # xtrace_disable
00:02:24.190 21:45:43 -- common/autotest_common.sh@10 -- # set +x
00:02:24.190 21:45:43 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/autotest.sh
00:02:24.190 21:45:43 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk
00:02:24.190 21:45:43 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/crypto-phy-autotest/spdk
00:02:24.190 21:45:43 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:02:24.190 21:45:43 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/crypto-phy-autotest/spdk
00:02:24.190 21:45:43 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:02:24.190 21:45:43 -- common/autotest_common.sh@1455 -- # uname
00:02:24.190 21:45:43 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']'
00:02:24.190 21:45:43 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:02:24.190 21:45:43 -- common/autotest_common.sh@1475 -- # uname
00:02:24.190 21:45:43 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]]
00:02:24.190 21:45:43 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk
00:02:24.190 21:45:43 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc
00:02:24.190 21:45:43 -- spdk/autotest.sh@72 -- # hash lcov
00:02:24.190 21:45:43 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:02:24.190 21:45:43 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS=
00:02:24.190 --rc lcov_branch_coverage=1
00:02:24.190 --rc lcov_function_coverage=1
00:02:24.190 --rc genhtml_branch_coverage=1
00:02:24.190 --rc genhtml_function_coverage=1
00:02:24.190 --rc genhtml_legend=1
00:02:24.190 --rc geninfo_all_blocks=1
00:02:24.190 '
00:02:24.190 21:45:43 -- spdk/autotest.sh@80 -- # LCOV_OPTS='
00:02:24.190 --rc lcov_branch_coverage=1
00:02:24.190 --rc lcov_function_coverage=1
00:02:24.190 --rc genhtml_branch_coverage=1
00:02:24.190 --rc genhtml_function_coverage=1
00:02:24.190 --rc genhtml_legend=1
00:02:24.190 --rc geninfo_all_blocks=1
00:02:24.190 '
00:02:24.190 21:45:43 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov
00:02:24.190 --rc lcov_branch_coverage=1
00:02:24.191 --rc lcov_function_coverage=1
00:02:24.191 --rc genhtml_branch_coverage=1
00:02:24.191 --rc genhtml_function_coverage=1
00:02:24.191 --rc genhtml_legend=1
00:02:24.191 --rc geninfo_all_blocks=1
00:02:24.191 --no-external'
00:02:24.191 21:45:43 -- spdk/autotest.sh@81 -- # LCOV='lcov
00:02:24.191 --rc lcov_branch_coverage=1
00:02:24.191 --rc lcov_function_coverage=1
00:02:24.191 --rc genhtml_branch_coverage=1
00:02:24.191 --rc genhtml_function_coverage=1
00:02:24.191 --rc genhtml_legend=1
00:02:24.191 --rc geninfo_all_blocks=1
00:02:24.191 --no-external'
00:02:24.191 21:45:43 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v
00:02:24.191 lcov: LCOV version 1.14
00:02:24.191 21:45:43 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d
/var/jenkins/workspace/crypto-phy-autotest/spdk -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info 00:02:28.383 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:28.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:28.383 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:28.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:28.383 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:28.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:28.383 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:28.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:28.383 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:28.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:28.383 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:28.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:28.383 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:28.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:28.383 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno:no 
functions found 00:02:28.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:28.383 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:28.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:28.383 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:28.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:28.383 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:28.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:28.383 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:28.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:28.383 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:28.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:28.383 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:28.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:28.383 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:28.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:28.383 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:28.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:28.383 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:28.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:28.383 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:28.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:28.383 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:28.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:28.383 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:28.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:28.383 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:28.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:28.383 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:28.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:28.383 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:28.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:28.383 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:28.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:28.383 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:28.384 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no 
functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:28.384 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:28.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:28.645 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:28.645 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:28.645 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:28.645 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:28.645 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:28.645 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:28.645 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:28.645 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:28.645 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:28.645 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:28.645 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:28.645 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:28.645 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:28.645 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:28.645 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:28.645 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:28.645 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:28.645 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 
00:02:28.645 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:28.645 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:28.645 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:28.645 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:28.645 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:28.645 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:28.645 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:28.645 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:28.645 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:28.645 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:28.645 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:28.645 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:28.645 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:28.645 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:28.645 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:28.645 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:28.645 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:28.645 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:28.645 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:28.645 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:28.645 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:28.645 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:28.645 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:28.645 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:28.645 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:28.645 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:28.645 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:28.645 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:28.645 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:28.645 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:28.645 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:28.645 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:28.904 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:28.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:28.905 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:28.905 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:43.790 /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:43.790 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:50.358 21:46:08 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:50.358 21:46:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:50.358 21:46:08 -- common/autotest_common.sh@10 -- # set +x 00:02:50.358 21:46:08 -- spdk/autotest.sh@91 -- # rm -f 00:02:50.358 21:46:08 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:02:53.651 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:53.651 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:53.651 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:53.651 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:53.651 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:53.651 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:53.651 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:53.651 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:53.651 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:53.651 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:53.651 0000:80:04.5 (8086 2021): Already 
using the ioatdma driver 00:02:53.651 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:53.651 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:53.651 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:53.651 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:53.651 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:53.651 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:53.651 21:46:12 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:53.651 21:46:12 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:53.651 21:46:12 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:53.651 21:46:12 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:53.651 21:46:12 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:53.651 21:46:12 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:53.651 21:46:12 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:53.651 21:46:12 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:53.651 21:46:12 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:53.651 21:46:12 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:53.651 21:46:12 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:53.651 21:46:12 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:53.651 21:46:12 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:53.651 21:46:12 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:53.651 21:46:12 -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:53.651 No valid GPT data, bailing 00:02:53.651 21:46:13 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:53.651 21:46:13 -- scripts/common.sh@391 -- # pt= 00:02:53.651 21:46:13 -- scripts/common.sh@392 -- # return 1 00:02:53.651 21:46:13 -- spdk/autotest.sh@114 -- # dd if=/dev/zero 
of=/dev/nvme0n1 bs=1M count=1 00:02:53.651 1+0 records in 00:02:53.651 1+0 records out 00:02:53.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00679095 s, 154 MB/s 00:02:53.651 21:46:13 -- spdk/autotest.sh@118 -- # sync 00:02:53.651 21:46:13 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:53.651 21:46:13 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:53.651 21:46:13 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:01.772 21:46:20 -- spdk/autotest.sh@124 -- # uname -s 00:03:01.772 21:46:20 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:01.772 21:46:20 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh 00:03:01.772 21:46:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:01.772 21:46:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:01.772 21:46:20 -- common/autotest_common.sh@10 -- # set +x 00:03:01.772 ************************************ 00:03:01.772 START TEST setup.sh 00:03:01.772 ************************************ 00:03:01.772 21:46:20 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh 00:03:01.772 * Looking for test storage... 
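The wipe step just logged — spdk-gpt.py reporting "No valid GPT data, bailing", `blkid -s PTTYPE` printing nothing, then `dd` zeroing the first MiB of /dev/nvme0n1 — can be sketched as below. The name `wipe_if_unused` and the ability to run it on a regular file are illustrative assumptions; the real logic lives in spdk/autotest.sh and scripts/common.sh.

```shell
# Hedged sketch of the "no partition table -> zero the first MiB" step.
# wipe_if_unused is an illustrative name, not the real helper; it also
# accepts a regular file (conv=notrunc) so it can be exercised safely.
wipe_if_unused() {
    local block=$1 pt
    # blkid prints the partition-table type (gpt, dos, ...) or nothing;
    # a missing blkid binary or an unrecognized device leaves pt empty.
    pt=$(blkid -s PTTYPE -o value "$block" 2>/dev/null) || true
    if [[ -z $pt ]]; then
        # No recognizable partition table: clear the first MiB so stale
        # filesystem or GPT metadata cannot confuse later tests.
        dd if=/dev/zero of="$block" bs=1M count=1 conv=notrunc status=none
    fi
}
```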
00:03:01.772 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:03:01.772 21:46:20 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:01.772 21:46:20 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:01.772 21:46:20 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh 00:03:01.772 21:46:20 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:01.772 21:46:20 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:01.772 21:46:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:01.772 ************************************ 00:03:01.772 START TEST acl 00:03:01.772 ************************************ 00:03:01.772 21:46:20 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh 00:03:01.772 * Looking for test storage... 00:03:01.772 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:03:01.772 21:46:20 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:01.772 21:46:20 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:01.772 21:46:20 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:01.772 21:46:20 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:01.772 21:46:20 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:01.772 21:46:20 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:01.772 21:46:20 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:01.772 21:46:20 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:01.772 21:46:20 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:01.772 21:46:20 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:01.772 21:46:20 setup.sh.acl -- 
setup/acl.sh@12 -- # declare -a devs 00:03:01.772 21:46:20 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:01.772 21:46:20 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:01.772 21:46:20 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:01.772 21:46:20 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:01.772 21:46:20 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:05.966 21:46:24 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:05.966 21:46:24 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:05.966 21:46:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.966 21:46:24 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:05.966 21:46:24 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:05.966 21:46:24 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:03:10.193 Hugepages 00:03:10.193 node hugesize free / total 00:03:10.193 21:46:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.194 00:03:10.194 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:10.194 21:46:28 setup.sh.acl -- 
setup/acl.sh@19 -- # continue 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:10.194 
21:46:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@18 -- 
# read -r _ dev _ _ _ driver _ 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:10.194 21:46:28 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 
00:03:10.194 21:46:28 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:10.194 21:46:28 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:10.194 21:46:28 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:10.194 ************************************ 00:03:10.194 START TEST denied 00:03:10.194 ************************************ 00:03:10.194 21:46:29 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:10.194 21:46:29 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:10.194 21:46:29 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:10.194 21:46:29 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:10.194 21:46:29 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:10.194 21:46:29 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:03:14.389 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:14.389 21:46:33 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:14.389 21:46:33 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:14.389 21:46:33 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:14.389 21:46:33 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:14.389 21:46:33 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:14.389 21:46:33 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:14.389 21:46:33 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:14.389 21:46:33 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:14.389 21:46:33 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:14.389 21:46:33 setup.sh.acl.denied -- setup/common.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:19.660 00:03:19.660 real 0m9.473s 00:03:19.660 user 0m2.941s 00:03:19.660 sys 0m5.890s 00:03:19.660 21:46:38 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:19.660 21:46:38 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:19.660 ************************************ 00:03:19.660 END TEST denied 00:03:19.660 ************************************ 00:03:19.660 21:46:38 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:19.660 21:46:38 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:19.660 21:46:38 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:19.660 21:46:38 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:19.660 21:46:38 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:19.660 ************************************ 00:03:19.660 START TEST allowed 00:03:19.660 ************************************ 00:03:19.660 21:46:38 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:19.660 21:46:38 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:19.660 21:46:38 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:19.660 21:46:38 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:19.660 21:46:38 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.660 21:46:38 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:03:26.227 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:26.227 21:46:44 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:26.227 21:46:44 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:26.227 21:46:44 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:26.227 21:46:44 setup.sh.acl.allowed -- setup/common.sh@9 -- 
# [[ reset == output ]] 00:03:26.227 21:46:44 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:30.424 00:03:30.424 real 0m10.646s 00:03:30.424 user 0m3.029s 00:03:30.424 sys 0m5.860s 00:03:30.424 21:46:49 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:30.424 21:46:49 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:30.424 ************************************ 00:03:30.424 END TEST allowed 00:03:30.424 ************************************ 00:03:30.424 21:46:49 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:30.424 00:03:30.424 real 0m29.019s 00:03:30.424 user 0m9.089s 00:03:30.424 sys 0m17.806s 00:03:30.424 21:46:49 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:30.424 21:46:49 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:30.424 ************************************ 00:03:30.424 END TEST acl 00:03:30.424 ************************************ 00:03:30.424 21:46:49 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:30.424 21:46:49 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh 00:03:30.424 21:46:49 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:30.424 21:46:49 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.424 21:46:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:30.424 ************************************ 00:03:30.424 START TEST hugepages 00:03:30.424 ************************************ 00:03:30.424 21:46:49 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh 00:03:30.424 * Looking for test storage... 
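The denied/allowed pair above drives scripts/setup.sh first with PCI_BLOCKED=' 0000:d8:00.0' (the controller is skipped) and then with PCI_ALLOWED=0000:d8:00.0 (only that controller is rebound to vfio-pci). The filtering those two variables imply can be sketched as follows; `pci_can_use` is an assumed name for illustration, not the actual function in scripts/setup.sh.

```shell
# Sketch of allow/block filtering over PCI BDFs, as exercised by the
# denied and allowed tests above. pci_can_use is an illustrative name;
# the real decision logic lives in scripts/setup.sh.
pci_can_use() {
    local bdf=$1
    # A BDF on the block list is always skipped ("Skipping denied
    # controller at 0000:d8:00.0" in the denied test).
    [[ " $PCI_BLOCKED " == *" $bdf "* ]] && return 1
    # An empty allow list means every remaining device is usable;
    # otherwise the BDF must appear on the allow list.
    [[ -z $PCI_ALLOWED ]] && return 0
    [[ " $PCI_ALLOWED " == *" $bdf "* ]]
}
```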
00:03:30.424 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 37458452 kB' 'MemAvailable: 41037612 kB' 'Buffers: 5128 kB' 'Cached: 14765460 kB' 'SwapCached: 0 kB' 'Active: 11788568 kB' 'Inactive: 3520372 kB' 'Active(anon): 11376032 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541768 kB' 'Mapped: 165240 kB' 'Shmem: 10837680 kB' 'KReclaimable: 286784 kB' 'Slab: 917052 kB' 'SReclaimable: 286784 kB' 'SUnreclaim: 630268 kB' 'KernelStack: 22160 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439056 kB' 'Committed_AS: 12813576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218828 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 24516608 kB' 'DirectMap1G: 40894464 kB' 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.424 21:46:49 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.424 
21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.424 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.425 21:46:49 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.425 
21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce 
== \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ [... identical continue/IFS=': '/read xtrace triplets elided for WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted — none matched Hugepagesize ...] 00:03:30.425 21:46:49
setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.425 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@18 -- # 
global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
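The continue/IFS/read churn above is setup/common.sh scanning a meminfo dump one "Key: value" pair at a time until the requested key matches, at which point @33 echoes the value (here 2048 for Hugepagesize, stored as default_hugepages). A minimal sketch of that lookup, assuming a simplified signature that reads straight from a file — the real helper snapshots /proc/meminfo (or a per-node meminfo) via mapfile first:

```shell
#!/usr/bin/env bash
# Hedged sketch of setup/common.sh's get_meminfo as seen in the xtrace:
# split each line on ': ' and echo the value for the requested key.
# The optional file argument is an assumption added here for testability.
get_meminfo() {
    local want=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$want" ]]; then
            echo "$val"        # e.g. "2048" for Hugepagesize
            return 0
        fi
    done < "$file"
    return 1                   # key not present
}
```

On the host above, `get_meminfo Hugepagesize` would print 2048 — the value hugepages.sh@16 stores as default_hugepages.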
00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:30.426 21:46:49 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:30.426 21:46:49 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:30.426 21:46:49 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.426 21:46:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:30.426 ************************************ 00:03:30.426 START TEST default_setup 00:03:30.426 ************************************ 00:03:30.426 21:46:49 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:30.426 21:46:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:30.426 21:46:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:30.426 21:46:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:30.426 21:46:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:30.426 21:46:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:30.426 21:46:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:30.426 21:46:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:30.426 21:46:49 
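clear_hp (hugepages.sh@37-@45 above) walks every node's hugepage entries under sysfs, echoes 0 into each nr_hugepages, then exports CLEAR_HUGE=yes. A rough reconstruction of that loop; SYSFS_ROOT is a hypothetical override, not in the original script, added so the sketch can run against a scratch tree without root:

```shell
#!/usr/bin/env bash
# Hedged sketch of clear_hp from the xtrace above: zero nr_hugepages for
# every page size on every NUMA node. SYSFS_ROOT is an assumption so this
# can be exercised on a fake directory tree instead of the live sysfs.
clear_hp() {
    local root=${SYSFS_ROOT:-/sys/devices/system/node} node hp
    for node in "$root"/node*; do
        for hp in "$node"/hugepages/hugepages-*/nr_hugepages; do
            [ -e "$hp" ] && echo 0 > "$hp"
        done
    done
    export CLEAR_HUGE=yes
}
```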
setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:30.426 21:46:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:30.426 21:46:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:30.426 21:46:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.426 21:46:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:30.426 21:46:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:30.426 21:46:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.426 21:46:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.426 21:46:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:30.426 21:46:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:30.426 21:46:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:30.426 21:46:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:30.426 21:46:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:30.426 21:46:49 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.426 21:46:49 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:33.758 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:33.758 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:33.758 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:33.758 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:33.758 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:33.758 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:33.758 0000:00:04.1 (8086 2021): ioatdma -> 
vfio-pci 00:03:33.758 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:33.758 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:33.758 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:33.758 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:33.758 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:33.758 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:33.758 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:34.016 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:34.016 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:35.924 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:35.924 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:35.924 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:35.924 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:35.924 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:35.924 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:35.924 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:35.924 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:35.924 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:35.924 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:35.924 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:35.924 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:35.924 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:35.924 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:35.924 21:46:55 setup.sh.hugepages.default_setup -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.924 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.924 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.924 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.924 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.924 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.924 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.924 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39593776 kB' 'MemAvailable: 43172776 kB' 'Buffers: 5128 kB' 'Cached: 14765592 kB' 'SwapCached: 0 kB' 'Active: 11804316 kB' 'Inactive: 3520372 kB' 'Active(anon): 11391780 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557308 kB' 'Mapped: 165400 kB' 'Shmem: 10837812 kB' 'KReclaimable: 286464 kB' 'Slab: 914528 kB' 'SReclaimable: 286464 kB' 'SUnreclaim: 628064 kB' 'KernelStack: 22272 kB' 'PageTables: 8496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12824792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218844 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 
24516608 kB' 'DirectMap1G: 40894464 kB' 00:03:35.924 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.924 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.924 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.924 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ [... identical continue/IFS=': '/read xtrace triplets elided for MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu — none matched AnonHugePages ...] 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.926 21:46:55 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39593856 kB' 'MemAvailable: 43172856 kB' 'Buffers: 5128 kB' 'Cached: 14765596 kB' 'SwapCached: 0 kB' 'Active: 11804392 kB' 'Inactive: 3520372 kB' 'Active(anon): 11391856 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557396 kB' 'Mapped: 165284 kB' 'Shmem: 10837816 kB' 'KReclaimable: 286464 kB' 'Slab: 914416 kB' 'SReclaimable: 286464 kB' 'SUnreclaim: 627952 kB' 'KernelStack: 22192 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12824812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218780 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 24516608 kB' 'DirectMap1G: 40894464 kB' 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.926 21:46:55 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.926 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.926 
21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.927 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 
00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39594152 kB' 'MemAvailable: 43173152 kB' 'Buffers: 5128 kB' 'Cached: 14765612 kB' 'SwapCached: 0 kB' 'Active: 11804128 kB' 'Inactive: 3520372 kB' 'Active(anon): 11391592 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557056 kB' 'Mapped: 165284 kB' 'Shmem: 10837832 kB' 'KReclaimable: 286464 kB' 'Slab: 914416 kB' 'SReclaimable: 286464 kB' 'SUnreclaim: 627952 kB' 'KernelStack: 22240 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12824832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218844 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 24516608 kB' 'DirectMap1G: 40894464 kB' 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.928 21:46:55 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.928 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.928 [... identical "setup/common.sh@32 [[ <key> == HugePages_Rsvd ]] / continue" iterations for the remaining /proc/meminfo keys, Inactive(file) through HugePages_Free ...] 00:03:36.192 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.192 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:36.192 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:36.192 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:36.192 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:36.192 nr_hugepages=1024 00:03:36.192 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:36.192 resv_hugepages=0 00:03:36.192 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:36.192 surplus_hugepages=0 00:03:36.192 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:36.192 anon_hugepages=0 00:03:36.192 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.192 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:36.192 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:36.192 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:36.192 21:46:55 setup.sh.hugepages.default_setup --
setup/common.sh@18 -- # local node= 00:03:36.192 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:36.192 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:36.192 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.192 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.192 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.192 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.192 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.192 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:36.192 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:36.192 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39593908 kB' 'MemAvailable: 43172908 kB' 'Buffers: 5128 kB' 'Cached: 14765612 kB' 'SwapCached: 0 kB' 'Active: 11805308 kB' 'Inactive: 3520372 kB' 'Active(anon): 11392772 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558300 kB' 'Mapped: 165284 kB' 'Shmem: 10837832 kB' 'KReclaimable: 286464 kB' 'Slab: 914416 kB' 'SReclaimable: 286464 kB' 'SUnreclaim: 627952 kB' 'KernelStack: 22176 kB' 'PageTables: 8832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12839120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218844 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 24516608 kB' 'DirectMap1G: 40894464 kB' 00:03:36.192 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.192 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:36.192 [... identical "setup/common.sh@32 [[ <key> == HugePages_Total ]] / continue" iterations for the remaining /proc/meminfo keys, MemFree through Unaccepted ...] 00:03:36.193 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.193 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:36.194 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:36.194 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.194 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:36.194 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:36.194 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.194 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:36.194 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.194 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:36.194 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:36.194 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:36.194 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.194 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.194 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:03:36.194 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.194 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:36.194 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:36.194 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:36.194 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.194 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:36.194 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:36.194 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.194 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.194 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21636548 kB' 'MemUsed: 11002592 kB' 'SwapCached: 0 kB' 'Active: 7253604 kB' 'Inactive: 175376 kB' 'Active(anon): 7048524 kB' 'Inactive(anon): 0 kB' 'Active(file): 205080 kB' 'Inactive(file): 175376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7027076 kB' 'Mapped: 114848 kB' 'AnonPages: 405492 kB' 'Shmem: 6646620 kB' 'KernelStack: 12088 kB' 'PageTables: 5536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132936 kB' 'Slab: 434528 kB' 'SReclaimable: 132936 kB' 'SUnreclaim: 301592 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:36.194 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:36.194 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:36.194 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [... repetitive xtrace condensed: the read loop skipped every node0 meminfo key from MemTotal through HugePages_Free, none matching HugePages_Surp ...]
00:03:36.195 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:36.195 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:36.195 21:46:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:36.195 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:36.195 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:36.195 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:36.195 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:36.195 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:36.195 node0=1024 expecting 1024 00:03:36.195 21:46:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:36.195 00:03:36.195 real 0m5.838s 00:03:36.195 user 0m1.269s 00:03:36.195 sys 0m2.584s 00:03:36.195 21:46:55 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:36.195 21:46:55 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:36.195 ************************************ 00:03:36.195 END TEST default_setup 00:03:36.195 ************************************ 00:03:36.195 21:46:55 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:36.195 21:46:55 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:36.195 21:46:55 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:36.195 21:46:55 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.195 21:46:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:36.195 ************************************ 00:03:36.195 START TEST per_node_1G_alloc 00:03:36.195 ************************************ 00:03:36.195 21:46:55 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:36.195 21:46:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:36.195 21:46:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:36.195 21:46:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:36.195 21:46:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:36.195 21:46:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:36.195 21:46:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:36.195 21:46:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:36.195 21:46:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:36.195 21:46:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:36.195 21:46:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:36.195 21:46:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:36.195 21:46:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:36.195 21:46:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:36.195 21:46:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:36.195 21:46:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:36.195 21:46:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:36.195 21:46:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:36.195 21:46:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:36.195 21:46:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:36.195 21:46:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:36.195 21:46:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:36.195 21:46:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:36.195 21:46:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:36.195 21:46:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:36.195 21:46:55 
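The get_test_nr_hugepages_per_node trace above (setup/hugepages.sh@62 through @73) can be sketched as a small standalone function. The names follow the xtrace, but this is an illustrative approximation, not the SPDK script itself:

```shell
#!/usr/bin/env bash
# Sketch of the per-node hugepage bookkeeping traced above
# (setup/hugepages.sh@62-@73). Approximation for illustration only.
get_test_nr_hugepages_per_node() {
    local -a user_nodes=("$@")          # node IDs from the caller, e.g. 0 1
    local _nr_hugepages=${NRHUGE:-1024} # per-node page count
    local -a nodes_test=()
    local node

    # Every requested node is assigned the full per-node page count.
    for node in "${user_nodes[@]}"; do
        nodes_test[node]=$_nr_hugepages
    done

    for node in "${!nodes_test[@]}"; do
        echo "node${node}=${nodes_test[node]}"
    done
}
```

With NRHUGE=512 and HUGENODE=0,1 as in this run, each of the two NUMA nodes is asked for 512 pages of the default 2048 kB size, i.e. 1 GiB per node — hence the test name per_node_1G_alloc.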
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:36.195 21:46:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.195 21:46:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:40.396 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:40.396 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:40.396 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:40.396 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:40.396 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:40.396 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:40.396 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:40.396 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:40.396 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:40.396 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:40.396 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:40.396 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:40.396 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:40.396 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:40.396 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:40.396 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:40.396 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local 
sorted_t 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39594908 kB' 'MemAvailable: 
43173908 kB' 'Buffers: 5128 kB' 'Cached: 14765752 kB' 'SwapCached: 0 kB' 'Active: 11802992 kB' 'Inactive: 3520372 kB' 'Active(anon): 11390456 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555672 kB' 'Mapped: 164204 kB' 'Shmem: 10837972 kB' 'KReclaimable: 286464 kB' 'Slab: 914756 kB' 'SReclaimable: 286464 kB' 'SUnreclaim: 628292 kB' 'KernelStack: 22016 kB' 'PageTables: 8116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12816444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218748 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 24516608 kB' 'DirectMap1G: 40894464 kB' 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.396 21:46:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [... repetitive xtrace condensed: the read loop skipped MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, none matching AnonHugePages ...]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.396 21:46:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.396 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.397 21:46:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39595320 kB' 'MemAvailable: 43174320 kB' 'Buffers: 5128 kB' 'Cached: 14765756 kB' 'SwapCached: 0 kB' 'Active: 11803008 kB' 'Inactive: 3520372 kB' 'Active(anon): 11390472 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555720 kB' 'Mapped: 164204 kB' 'Shmem: 10837976 kB' 'KReclaimable: 286464 kB' 'Slab: 914828 kB' 'SReclaimable: 286464 kB' 'SUnreclaim: 628364 kB' 'KernelStack: 22032 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12816464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218732 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 24516608 kB' 'DirectMap1G: 40894464 kB' 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var
val _ 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.397 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # 
get_meminfo HugePages_Rsvd 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39595824 kB' 'MemAvailable: 43174824 kB' 'Buffers: 5128 kB' 'Cached: 14765756 kB' 'SwapCached: 0 kB' 'Active: 11803008 kB' 'Inactive: 3520372 kB' 'Active(anon): 11390472 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555720 kB' 'Mapped: 164204 kB' 'Shmem: 10837976 kB' 'KReclaimable: 286464 kB' 'Slab: 914828 kB' 'SReclaimable: 286464 kB' 'SUnreclaim: 628364 kB' 'KernelStack: 22032 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12816484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218732 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 24516608 kB' 'DirectMap1G: 40894464 kB' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 
21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.398 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.399 21:46:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.399 21:46:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.399 21:46:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:40.399 nr_hugepages=1024 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:40.399 resv_hugepages=0 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:40.399 surplus_hugepages=0 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:40.399 anon_hugepages=0 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.399 21:46:59 
00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:40.399 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39595992 kB' 'MemAvailable: 43174992 kB' 'Buffers: 5128 kB' 'Cached: 14765796 kB' 'SwapCached: 0 kB' 'Active: 11803060 kB' 'Inactive: 3520372 kB' 'Active(anon): 11390524 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555724 kB' 'Mapped: 164204 kB' 'Shmem: 10838016 kB' 'KReclaimable: 286464 kB' 'Slab: 914828 kB' 'SReclaimable: 286464 kB' 'SUnreclaim: 628364 kB' 'KernelStack: 22032 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12816508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218732 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 24516608 kB' 'DirectMap1G: 40894464 kB'
[setup/common.sh@31-32 -- each field before HugePages_Total fails the match and hits `continue`; the repeated IFS/read/continue trace lines for every non-matching field are elided here]
00:03:40.400 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:40.400 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:40.400 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:40.400 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:40.400 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:40.400 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:40.400 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:40.400 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:40.400 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
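The trace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo line by line until the requested field (here HugePages_Total) matches, then echoing its value. A minimal standalone sketch of that technique follows; the optional file argument is an addition for illustration so the function can be exercised against sample data, and is not part of the real setup/common.sh interface:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo field scan seen in the trace above.
# The second (file) argument is a test convenience only.
shopt -s extglob

get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local -a mem
    local var val _
    mapfile -t mem < "$mem_f"
    # Per-node sysfs meminfo files prefix each line with "Node N "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Non-matching fields are skipped (the "continue" lines in the trace).
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}
```

Called as `get_meminfo HugePages_Total`, it reads the live /proc/meminfo; with a node-specific sysfs file it yields per-node counts, which is what the script does next.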
00:03:40.400 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:40.400 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:40.400 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:40.400 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:40.400 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:40.400 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:40.400 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:40.400 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:40.400 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:40.400 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:40.400 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.400 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:40.400 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:40.400 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.400 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.400 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:40.400 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:40.400 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22661388 kB' 'MemUsed: 9977752 kB' 'SwapCached: 0 kB' 'Active: 7253884 kB' 'Inactive: 175376 kB' 'Active(anon): 7048804 kB' 'Inactive(anon): 0 kB' 'Active(file): 205080 kB' 'Inactive(file): 175376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7027092 kB' 'Mapped: 113816 kB' 'AnonPages: 405276 kB' 'Shmem: 6646636 kB' 'KernelStack: 12024 kB' 'PageTables: 5316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132936 kB' 'Slab: 434892 kB' 'SReclaimable: 132936 kB' 'SUnreclaim: 301956 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[setup/common.sh@31-32 -- each node0 field before HugePages_Surp fails the match and hits `continue`; the repeated trace lines are elided, and the scan is cut off mid-loop at the end of this log chunk]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.400 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.401 21:46:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656072 kB' 'MemFree: 16935268 kB' 'MemUsed: 10720804 kB' 'SwapCached: 0 kB' 'Active: 4549548 kB' 'Inactive: 3344996 kB' 'Active(anon): 4342092 kB' 'Inactive(anon): 0 kB' 'Active(file): 207456 kB' 'Inactive(file): 3344996 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7743876 kB' 'Mapped: 50388 kB' 'AnonPages: 150792 kB' 'Shmem: 4191424 kB' 'KernelStack: 10024 kB' 'PageTables: 2912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 153528 kB' 'Slab: 479936 kB' 'SReclaimable: 153528 kB' 'SUnreclaim: 326408 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 
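The trace above shows `get_meminfo` walking every `key: value` line of `/sys/devices/system/node/node1/meminfo` (after stripping the `Node N ` prefix) until the requested key, `HugePages_Surp`, matches. A hypothetical condensed sketch of that lookup — not the actual `setup/common.sh` code, which uses `mapfile` plus a `read -r var val _` loop — using an illustrative helper name `get_meminfo_sketch`:

```shell
#!/bin/sh
# Sketch only: per-node meminfo lines look like "Node 1 HugePages_Surp: 0".
# Strip the "Node <n> " prefix, then print the value for the requested key.
get_meminfo_sketch() {
    key=$1; file=$2
    awk -v k="$key" 'BEGIN { pat = k ":" }
        { sub(/^Node [0-9]+ /, "")      # drop the per-node prefix
          if ($1 == pat) print $2 }     # value for the matching key
    ' "$file"
}

# Usage with a fabricated node meminfo file (values taken from the log above):
tmp=$(mktemp)
printf 'Node 1 HugePages_Total: 512\nNode 1 HugePages_Free: 512\nNode 1 HugePages_Surp: 0\n' > "$tmp"
get_meminfo_sketch HugePages_Surp "$tmp"   # prints 0
```

The real script additionally falls back to `echo 0` when the key is absent, which is why the trace ends each lookup with `echo 0` / `return 0`.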
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:40.401 node0=512 expecting 512 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.401 
21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:40.401 node1=512 expecting 512 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:40.401 00:03:40.401 real 0m3.989s 00:03:40.401 user 0m1.392s 00:03:40.401 sys 0m2.551s 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.401 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:40.401 ************************************ 00:03:40.401 END TEST per_node_1G_alloc 00:03:40.401 ************************************ 00:03:40.401 21:46:59 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:40.401 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:40.401 21:46:59 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.401 21:46:59 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.401 21:46:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:40.401 ************************************ 00:03:40.401 START TEST even_2G_alloc 00:03:40.401 ************************************ 00:03:40.401 21:46:59 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:40.401 21:46:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:40.401 21:46:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:40.401 21:46:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:40.401 21:46:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:40.401 21:46:59 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:40.401 21:46:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:40.401 21:46:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:40.401 21:46:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:40.401 21:46:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:40.401 21:46:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:40.401 21:46:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:40.401 21:46:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:40.401 21:46:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:40.401 21:46:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:40.401 21:46:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:40.401 21:46:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:40.401 21:46:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:40.401 21:46:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:40.401 21:46:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:40.401 21:46:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:40.401 21:46:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:40.401 21:46:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:40.401 21:46:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:40.401 21:46:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # 
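The `even_2G_alloc` setup traced here converts the requested size (2097152 kB) into 1024 default-size (2 MiB) hugepages, then `get_test_nr_hugepages_per_node` divides them evenly across the 2 NUMA nodes, assigning 512 to each. A hedged sketch of that arithmetic (illustrative helper name, not the `setup/hugepages.sh` implementation, which fills `nodes_test[]` in a `(( _no_nodes > 0 ))` countdown loop):

```shell
#!/bin/sh
# Sketch: split a hugepage count evenly across NUMA nodes,
# mirroring the "node0=512 expecting 512" output seen in the log.
split_hugepages_evenly() {
    total=$1 nodes=$2
    per_node=$((total / nodes))
    i=0
    while [ "$i" -lt "$nodes" ]; do
        echo "node$i=$per_node"
        i=$((i + 1))
    done
}

split_hugepages_evenly 1024 2
# prints:
# node0=512
# node1=512
```

The later `[[ 512 == \5\1\2 ]]` checks in the trace verify that each node's actual `HugePages_Total` (read back via `get_meminfo`) matches this expected per-node count.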
NRHUGE=1024
00:03:40.401 21:46:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:40.401 21:46:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:40.401 21:46:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:40.402 21:46:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:03:44.599 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:44.599 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:44.599 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:44.599 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:44.599 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:44.599 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:44.599 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:44.599 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:44.599 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:44.599 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:44.599 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:44.599 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:44.599 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:44.599 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:44.599 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:44.599 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:44.599 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39589708 kB' 'MemAvailable: 43168708 kB' 'Buffers: 5128 kB' 'Cached: 14765936 kB' 'SwapCached: 0 kB' 'Active: 11803572 kB' 'Inactive: 3520372 kB' 'Active(anon): 11391036 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555804 kB' 'Mapped: 164392 kB' 'Shmem: 10838156 kB' 'KReclaimable: 286464 kB' 'Slab: 915368 kB' 'SReclaimable: 286464 kB' 'SUnreclaim: 628904 kB' 'KernelStack: 22064 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12817440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218748 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 24516608 kB' 'DirectMap1G: 40894464 kB'
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:44.599 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.599 21:47:03
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[00:03:44.599-00:03:44.600 21:47:03 — the get_meminfo scan repeats for each remaining meminfo key (MemAvailable above, then Buffers, Cached, SwapCached, ... through HardwareCorrupted), comparing every key against AnonHugePages at setup/common.sh@32 and skipping non-matches with `continue`, until the matching key is read:]
00:03:44.600 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:44.600 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:44.600 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.600 21:47:03
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:44.600 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:44.600 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:44.600 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:44.600 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:44.600 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:44.600 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.600 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.600 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.600 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.600 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.600 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:44.600 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:44.600 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39588772 kB' 'MemAvailable: 43167772 kB' 'Buffers: 5128 kB' 'Cached: 14765940 kB' 'SwapCached: 0 kB' 'Active: 11803272 kB' 'Inactive: 3520372 kB' 'Active(anon): 11390736 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555932 kB' 'Mapped: 164236 kB' 'Shmem: 10838160 kB' 'KReclaimable: 286464 kB' 'Slab: 915348 kB' 'SReclaimable: 286464 kB' 'SUnreclaim: 628884 kB' 'KernelStack: 22032 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12817460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218732 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 24516608 kB' 'DirectMap1G: 40894464 kB'
[00:03:44.600-00:03:44.602 21:47:03 — get_meminfo again scans the snapshot key by key (setup/common.sh@31-32), comparing each against HugePages_Surp and skipping non-matches with `continue`; this log chunk is cut off mid-scan at the ShmemHugePages key] 00:03:44.602 21:47:03
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 21:47:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.602 21:47:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39589364 kB' 'MemAvailable: 43168364 kB' 'Buffers: 5128 kB' 'Cached: 14765956 kB' 'SwapCached: 0 kB' 'Active: 11803328 kB' 'Inactive: 3520372 kB' 'Active(anon): 11390792 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 555932 kB' 'Mapped: 164236 kB' 'Shmem: 10838176 kB' 'KReclaimable: 286464 kB' 'Slab: 915348 kB' 'SReclaimable: 286464 kB' 'SUnreclaim: 628884 kB' 'KernelStack: 22032 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12817480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218732 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 24516608 kB' 'DirectMap1G: 40894464 kB' 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:44.602 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue iterations for the remaining /proc/meminfo fields elided ...]
00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:44.604 nr_hugepages=1024
00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:44.604 resv_hugepages=0
00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:44.604 surplus_hugepages=0
00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:44.604 anon_hugepages=0
00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39590284 kB' 'MemAvailable: 43169284 kB' 'Buffers: 5128 kB' 'Cached: 14765980 kB' 'SwapCached: 0 kB' 'Active: 11803184 kB' 'Inactive: 3520372 kB' 'Active(anon): 11390648 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555740 kB' 'Mapped: 164236 kB' 'Shmem: 10838200 kB' 'KReclaimable: 286464 kB' 'Slab: 915348 kB' 'SReclaimable: 286464 kB' 'SUnreclaim: 628884 kB' 'KernelStack: 22016 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12817504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218732 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 24516608 kB' 'DirectMap1G: 40894464 kB' 00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:44.606 
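The trace above is `setup/common.sh`'s `get_meminfo` walking every `/proc/meminfo` key with `IFS=': '` until the requested field (`HugePages_Total`) matches, then echoing its value (`1024`). A minimal sketch of that lookup pattern — function and file names here are illustrative, not the actual SPDK helpers:

```shell
#!/usr/bin/env bash
# Sketch of the field-lookup loop visible in the trace: read each
# "Key: value" line with IFS=': ' and emit the value once the key matches.
get_meminfo_field() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"          # e.g. 1024 for HugePages_Total in this run
            return 0
        fi
    done < "$mem_f"
    return 1                     # key not present
}

# Demo against a small sample mirroring the values printed in the log.
sample=$(mktemp)
printf '%s\n' 'HugePages_Total:    1024' 'HugePages_Free:     1024' \
              'HugePages_Rsvd:        0' > "$sample"
get_meminfo_field HugePages_Total "$sample"   # prints 1024
rm -f "$sample"
```

The script under test unrolls this loop in its xtrace output, which is why each non-matching key appears as a `[[ ... == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]` / `continue` pair.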
21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.606 
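Here the trace takes the per-node branch: with `node=0`, `mem_f` switches to `/sys/devices/system/node/node0/meminfo`, whose lines carry a leading `Node 0 ` prefix that `common.sh` strips with an extglob expansion (`${mem[@]#Node +([0-9]) }`) before the same `IFS=': '` loop runs. A hedged sketch of just the prefix-stripping step, with illustrative names:

```shell
#!/usr/bin/env bash
# Per-node meminfo lines look like "Node 0 HugePages_Total: 512"; strip
# the "Node <n> " prefix so the generic key/value parser can handle them.
shopt -s extglob    # enables the +([0-9]) pattern inside ${var#...}
strip_node_prefix() {
    local mem
    mapfile -t mem                               # read all lines from stdin
    mem=("${mem[@]#Node +([0-9]) }")             # drop "Node <n> " prefix
    printf '%s\n' "${mem[@]}"
}

echo 'Node 0 HugePages_Total: 512' | strip_node_prefix
# prints: HugePages_Total: 512
```

After this normalization, the node-0 lookup proceeds exactly like the system-wide one, which is why the following trace lines repeat the familiar `continue` cascade against `HugePages_Surp`.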
21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22652908 kB' 'MemUsed: 9986232 kB' 'SwapCached: 0 kB' 'Active: 7253412 kB' 'Inactive: 175376 kB' 'Active(anon): 7048332 kB' 'Inactive(anon): 0 kB' 'Active(file): 205080 kB' 'Inactive(file): 175376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7027100 kB' 'Mapped: 113848 kB' 'AnonPages: 404820 kB' 'Shmem: 6646644 kB' 'KernelStack: 12008 kB' 'PageTables: 5272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132936 kB' 'Slab: 434992 kB' 'SReclaimable: 132936 kB' 'SUnreclaim: 302056 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 21:47:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 21:47:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.606 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... the IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue xtrace cycle repeats identically for every remaining node0 meminfo field ...]
00:03:44.607 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.607 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.607 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.607 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.607 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.607 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.607 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:44.607 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.607 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:44.607 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.607 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.607 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.607 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:44.607 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:44.607 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.607 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.607 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:44.607 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.608 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656072 kB' 'MemFree: 16936368 kB' 'MemUsed: 10719704 kB' 'SwapCached: 0 kB' 'Active: 4550304 kB' 'Inactive: 3344996 kB' 'Active(anon): 4342848 kB' 'Inactive(anon): 0 kB' 'Active(file): 207456 kB' 'Inactive(file): 3344996 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7744048 kB' 'Mapped: 50388 kB' 'AnonPages: 151452 kB' 'Shmem: 4191596 kB' 'KernelStack: 10040 kB' 'PageTables: 2956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 153528 kB' 'Slab: 480356 kB' 'SReclaimable: 153528 kB' 'SUnreclaim: 326828 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:44.608 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.608 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.608 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.608 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.608 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.608 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.608 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.608 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.608 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.608 21:47:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.608 21:47:03
[... the IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue xtrace cycle repeats identically for every remaining node1 meminfo field ...]
00:03:44.609 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.609 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.609 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.609 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.609 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.609 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.609 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.609 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:44.609 node0=512 expecting 512 00:03:44.609 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.609 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.609 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.609 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:44.609 node1=512 expecting 512 00:03:44.609 21:47:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:44.609 00:03:44.609 real 0m4.228s 00:03:44.609 user 0m1.545s 00:03:44.609 sys 0m2.763s 00:03:44.609 21:47:03 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.609 21:47:03 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:44.609 ************************************ 00:03:44.609 END TEST even_2G_alloc 00:03:44.609 ************************************ 00:03:44.609 
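The even_2G_alloc trace above is dominated by setup/common.sh's get_meminfo: it picks a per-node meminfo file when one exists, strips the "Node <N> " prefix from each line, then splits every line on IFS=': ' until the requested field (here HugePages_Surp) matches, echoing its value or 0. A minimal self-contained sketch of that pattern follows; the function name is illustrative (not the actual SPDK helper), and the sed-based prefix stripping is a simplification of the extglob expansion mem=("${mem[@]#Node +([0-9]) }") seen in the trace.

```shell
# Sketch of the get_meminfo pattern from the xtrace: split each meminfo
# line on ': ', compare the key against the requested field, and print
# its value, falling back to 0 (as common.sh@33's `echo 0` does).
get_meminfo_sketch() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"   # e.g. "HugePages_Surp: 0" -> 0
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")   # drop "Node <N> " prefix
    echo 0   # field absent: report 0, like the trace's fallback
}

# Usage against a fabricated per-node snippet:
get_meminfo_sketch HugePages_Surp <(printf 'Node 1 HugePages_Surp: 0\n')   # prints 0
```

Reading /sys/devices/system/node/node<N>/meminfo instead of /proc/meminfo is what makes the per-node HugePages_Surp/HugePages_Free accounting in the test possible.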
21:47:03 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:44.609 21:47:03 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:44.609 21:47:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.609 21:47:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.609 21:47:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:44.609 ************************************ 00:03:44.609 START TEST odd_alloc 00:03:44.609 ************************************ 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 
)) 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.609 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:48.805 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:48.805 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:48.805 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:48.805 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:48.805 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:48.805 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:48.805 0000:00:04.1 (8086 2021): Already using the 
vfio-pci driver 00:03:48.805 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:48.805 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:48.805 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:48.805 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:48.805 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:48.805 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:48.805 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:48.805 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:48.805 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:48.805 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:48.805 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:48.805 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:48.805 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:48.805 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:48.805 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:48.805 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:48.805 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:48.805 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:48.805 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:48.805 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:48.805 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:48.805 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:48.805 21:47:08 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:03:48.805 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.805 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.805 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.805 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.805 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.805 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.805 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.805 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39600548 kB' 'MemAvailable: 43179548 kB' 'Buffers: 5128 kB' 'Cached: 14766100 kB' 'SwapCached: 0 kB' 'Active: 11804936 kB' 'Inactive: 3520372 kB' 'Active(anon): 11392400 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557388 kB' 'Mapped: 164360 kB' 'Shmem: 10838320 kB' 'KReclaimable: 286464 kB' 'Slab: 915360 kB' 'SReclaimable: 286464 kB' 'SUnreclaim: 628896 kB' 'KernelStack: 22064 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 12818008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218668 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 
kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 24516608 kB' 'DirectMap1G: 40894464 kB' 00:03:48.805 21:47:08 setup.sh.hugepages.odd_alloc -- [repetitive xtrace elided: the setup/common.sh@32 loop compared each /proc/meminfo key from MemTotal through HardwareCorrupted against AnonHugePages, issuing "continue" for every non-match] 00:03:48.807 21:47:08
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.807 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.807 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:48.807 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:48.807 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:48.807 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.807 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:48.807 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:48.807 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.807 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.807 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.807 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.807 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.807 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.807 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.807 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.807 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39601216 kB' 'MemAvailable: 43180216 kB' 'Buffers: 5128 kB' 'Cached: 14766104 kB' 'SwapCached: 0 kB' 'Active: 11804544 kB' 'Inactive: 3520372 kB' 'Active(anon): 11392008 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556916 kB' 'Mapped: 164360 kB' 'Shmem: 10838324 kB' 'KReclaimable: 286464 kB' 'Slab: 915360 kB' 'SReclaimable: 286464 kB' 'SUnreclaim: 628896 kB' 'KernelStack: 22032 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 12818024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218636 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 24516608 kB' 'DirectMap1G: 40894464 kB' 00:03:48.807 21:47:08 setup.sh.hugepages.odd_alloc -- [repetitive xtrace elided: the same setup/common.sh@32 loop compared each /proc/meminfo key against HugePages_Surp, issuing "continue" for every non-match; the captured log ends mid-loop]
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.808 21:47:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:48.808 21:47:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:48.808 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.809 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.809 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.809 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.809 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.809 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.809 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.809 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.809 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39601764 kB' 'MemAvailable: 43180764 kB' 'Buffers: 5128 kB' 'Cached: 14766120 kB' 'SwapCached: 0 kB' 'Active: 11804232 kB' 'Inactive: 3520372 kB' 'Active(anon): 11391696 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556560 kB' 'Mapped: 164280 kB' 'Shmem: 10838340 kB' 'KReclaimable: 286464 kB' 'Slab: 915380 kB' 'SReclaimable: 286464 kB' 'SUnreclaim: 628916 kB' 'KernelStack: 22048 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 12818184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218636 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 24516608 kB' 'DirectMap1G: 40894464 kB' 00:03:48.809 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.809 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.809 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.809 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.809 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.809 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.809 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.809 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.809 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.809 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.809 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.809 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.809 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.809 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.809 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.809 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.809 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.809 21:47:08 setup.sh.hugepages.odd_alloc -- 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.810 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.810 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.810 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.810 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:48.810 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:48.810 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:48.810 nr_hugepages=1025 00:03:48.810 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.810 resv_hugepages=0 00:03:48.810 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.810 surplus_hugepages=0 00:03:48.810 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.810 anon_hugepages=0 00:03:48.810 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:48.810 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:48.810 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.810 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.810 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:48.810 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:48.810 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.810 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.810 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ 
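The trace above repeatedly runs the same lookup: read `/proc/meminfo` as `Key: value` pairs and skip every field until the requested one matches. A minimal, self-contained sketch of that pattern is below; this is not the actual `setup/common.sh` source, the helper name is reused only for illustration, and a here-doc with sample values from this log stands in for the live `/proc/meminfo` so the sketch runs anywhere.

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo lookup pattern traced in the log:
# split each "Key: value [unit]" line on ': ' and print the value
# of the one requested field.
get_meminfo() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    # Skip every field until the requested one, as the traced loop does
    [[ $var == "$get" ]] || continue
    echo "$val"
    return 0
  done
  return 1
}

# Sample fields mirror the meminfo dump in the log above.
get_meminfo HugePages_Total <<'EOF'
MemTotal: 60295212 kB
HugePages_Total: 1025
HugePages_Free: 1025
HugePages_Rsvd: 0
HugePages_Surp: 0
EOF
# prints: 1025
```

The per-character escaping in the log (`\H\u\g\e...`) is just how bash's xtrace renders the pattern side of `[[ $var == "$get" ]]` when the right-hand side is quoted to force a literal match.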
-e /sys/devices/system/node/node/meminfo ]] 00:03:48.810 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.810 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.810 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39602320 kB' 'MemAvailable: 43181320 kB' 'Buffers: 5128 kB' 'Cached: 14766172 kB' 'SwapCached: 0 kB' 'Active: 11804420 kB' 'Inactive: 3520372 kB' 'Active(anon): 11391884 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556796 kB' 'Mapped: 164280 kB' 'Shmem: 10838392 kB' 'KReclaimable: 286464 kB' 'Slab: 915380 kB' 'SReclaimable: 286464 kB' 'SUnreclaim: 628916 kB' 'KernelStack: 22048 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 12818572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218636 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 24516608 kB' 'DirectMap1G: 40894464 kB' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.811 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.812 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 
00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22672256 kB' 'MemUsed: 9966884 kB' 'SwapCached: 0 kB' 'Active: 7256264 kB' 
'Inactive: 175376 kB' 'Active(anon): 7051184 kB' 'Inactive(anon): 0 kB' 'Active(file): 205080 kB' 'Inactive(file): 175376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7027200 kB' 'Mapped: 113860 kB' 'AnonPages: 407608 kB' 'Shmem: 6646744 kB' 'KernelStack: 12088 kB' 'PageTables: 5500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132936 kB' 'Slab: 434920 kB' 'SReclaimable: 132936 kB' 'SUnreclaim: 301984 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.072 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.073 21:47:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.073 21:47:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.073 21:47:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.073 21:47:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.073 21:47:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.073 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.074 21:47:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.074 
21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- 
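The trace above shows `get_meminfo HugePages_Surp 0` scanning every `/proc/meminfo`-style field until the requested key matches. A minimal standalone sketch of that pattern (the function name and flow mirror the `setup/common.sh@17-33` trace, but this is a reconstruction, not the actual SPDK script):

```shell
#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) prefix-strip pattern below

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _ line
    local mem_f=/proc/meminfo
    # Per-node counters live in sysfs when a NUMA node is requested,
    # as at common.sh@23-24 in the trace.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip it (common.sh@29).
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        # IFS=': ' splits "Key:   value kB" into key, value, unit.
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}
```

This is why the log repeats one `[[ field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]` / `continue` pair per meminfo field: xtrace prints every iteration of the scan until the key matches and the value is echoed.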
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656072 kB' 'MemFree: 16930076 kB' 'MemUsed: 10725996 kB' 'SwapCached: 0 kB' 'Active: 4548068 kB' 'Inactive: 3344996 kB' 'Active(anon): 4340612 kB' 'Inactive(anon): 0 kB' 'Active(file): 207456 kB' 'Inactive(file): 3344996 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7744100 kB' 'Mapped: 50420 kB' 'AnonPages: 149084 kB' 'Shmem: 4191648 kB' 'KernelStack: 9976 kB' 'PageTables: 2772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 153528 kB' 'Slab: 480460 kB' 'SReclaimable: 153528 kB' 'SUnreclaim: 326932 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:49.074 21:47:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.074 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.075 21:47:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.075 21:47:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.075 21:47:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.075 21:47:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.075 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.076 21:47:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.076 21:47:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.076 21:47:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:49.076 node0=512 expecting 513 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:49.076 node1=513 expecting 512 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:49.076 00:03:49.076 real 0m4.390s 00:03:49.076 user 0m1.588s 00:03:49.076 sys 0m2.883s 00:03:49.076 21:47:08 
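The final comparison at `hugepages.sh@126-130` accepts either node ordering ("node0=512 expecting 513" still passes) by using the per-node counts as indexed-array keys, since bash expands indexed-array indices in ascending numeric order. A self-contained sketch with hypothetical counts:

```shell
#!/usr/bin/env bash
# nodes_test holds hypothetical per-node counts; in the real run they
# come from get_meminfo. Deliberately swapped versus the expectation.
declare -a nodes_test=([0]=512 [1]=513)
declare -a sorted_t=()
for node in "${!nodes_test[@]}"; do
    # Index by the count itself (hugepages.sh@127); expanding the
    # indices with ${!sorted_t[@]} then yields them sorted, for free.
    sorted_t[nodes_test[node]]=1
done
echo "${!sorted_t[@]}"   # counts in ascending order, whichever node got which
[[ "${!sorted_t[*]}" == "512 513" ]] && echo match
```

So the `[[ 512 513 == \5\1\2\ \5\1\3 ]]` at @130 compares sorted totals, which is what lets the kernel place the odd page on either node.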
setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:49.076 21:47:08 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:49.076 ************************************ 00:03:49.076 END TEST odd_alloc 00:03:49.076 ************************************ 00:03:49.076 21:47:08 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:49.076 21:47:08 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:49.076 21:47:08 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:49.076 21:47:08 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.076 21:47:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:49.076 ************************************ 00:03:49.076 START TEST custom_alloc 00:03:49.076 ************************************ 00:03:49.076 21:47:08 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:49.076 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:49.076 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:49.076 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:49.076 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:49.076 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:49.076 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:49.076 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:49.076 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:49.076 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:49.076 21:47:08 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:49.076 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:49.076 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:49.076 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:49.076 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:49.076 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 
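The `@81-84` loop above walks node indices downward, assigning 256 pages per node for the 512-page request. The helper below is a plausible reconstruction of that default split (even division, remainder to the lowest-numbered nodes, which would produce odd_alloc's 513/512 outcome); the function name and remainder handling are assumptions, not the script's code:

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of get_test_nr_hugepages_per_node's
# default path: split a page count evenly across _no_nodes nodes.
split_hugepages() {
    local _nr_hugepages=$1 _no_nodes=$2
    local -a nodes_test=()
    local per=$(( _nr_hugepages / _no_nodes ))
    local rem=$(( _nr_hugepages % _no_nodes ))
    # Walk indices downward, as the @81-82 trace does.
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$per
        (( _no_nodes-- ))
    done
    # Hand any remainder to the lowest-numbered nodes (assumed).
    local i=0
    while (( rem > 0 )); do
        (( nodes_test[i] += 1, i += 1, rem -= 1 ))
    done
    echo "${nodes_test[@]}"
}
```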
00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:49.077 21:47:08 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 
00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.077 21:47:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:53.327 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:53.327 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:53.327 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:53.327 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:53.327 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:53.327 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:53.327 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:53.327 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:53.327 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:53.327 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:53.327 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:53.327 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:53.327 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:53.327 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:53.327 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:53.327 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:53.327 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- 
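The `@181-187` steps above assemble `HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'` and accumulate `nr_hugepages=1536`. The comma join comes from the `local IFS=,` at `@167`: expanding the array with `"${HUGENODE[*]}"` uses the first IFS character as separator. A small sketch of that mechanism (wrapped in a hypothetical function so `local IFS` is legal):

```shell
#!/usr/bin/env bash
# Hypothetical wrapper around the hugepages.sh@167,@181-183 pattern:
# build the HUGENODE string from per-node hugepage counts.
build_hugenode() {
    local IFS=,     # "*" expansion now joins array elements with commas
    local -a nodes_hp=("$@") HUGENODE=()
    local node _nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( _nr_hugepages += nodes_hp[node] ))
    done
    echo "${HUGENODE[*]}"   # e.g. nodes_hp[0]=512,nodes_hp[1]=1024
    echo "$_nr_hugepages"   # e.g. 1536
}
```

With the log's values, `build_hugenode 512 1024` reproduces exactly the `HUGENODE` string and the 1536-page total that `setup.sh` receives before the verify step.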
setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.327 
21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 38545376 kB' 'MemAvailable: 42124376 kB' 'Buffers: 5128 kB' 'Cached: 14766272 kB' 'SwapCached: 0 kB' 'Active: 11806076 kB' 'Inactive: 3520372 kB' 'Active(anon): 11393540 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558048 kB' 'Mapped: 164324 kB' 'Shmem: 10838492 kB' 'KReclaimable: 286464 kB' 'Slab: 915412 kB' 'SReclaimable: 286464 kB' 'SUnreclaim: 628948 kB' 'KernelStack: 22128 kB' 'PageTables: 8172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 12820396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218844 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 24516608 kB' 'DirectMap1G: 40894464 kB' 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.327 
21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.327 21:47:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.327 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.328 
21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.328 
21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 38546140 kB' 'MemAvailable: 42125140 kB' 'Buffers: 5128 kB' 'Cached: 14766276 kB' 'SwapCached: 0 kB' 'Active: 11805800 kB' 'Inactive: 3520372 kB' 'Active(anon): 11393264 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558080 kB' 'Mapped: 164260 kB' 'Shmem: 10838496 kB' 'KReclaimable: 286464 kB' 'Slab: 915472 kB' 'SReclaimable: 286464 kB' 'SUnreclaim: 629008 kB' 'KernelStack: 22048 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 12822032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218780 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 24516608 kB' 'DirectMap1G: 40894464 kB'
00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:53.328
21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.328 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.329 
21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.329 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:53.329-00:03:53.330 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [repeated scan: each remaining /proc/meminfo field (CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd) compared against HugePages_Surp; no match, continue] 00:03:53.330 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.330 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.330 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- #
return 0 00:03:53.330 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:53.330 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:53.330 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:53.330 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:53.330 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:53.330 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.330 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.330 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.330 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.330 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.330 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.330 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.330 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.330 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 38545560 kB' 'MemAvailable: 42124560 kB' 'Buffers: 5128 kB' 'Cached: 14766292 kB' 'SwapCached: 0 kB' 'Active: 11806112 kB' 'Inactive: 3520372 kB' 'Active(anon): 11393576 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558412 kB' 'Mapped: 164260 kB' 'Shmem: 10838512 kB' 'KReclaimable: 286464 kB' 'Slab: 915476 kB' 'SReclaimable: 286464 kB' 
'SUnreclaim: 629012 kB' 'KernelStack: 22064 kB' 'PageTables: 8136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 12822052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218780 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 24516608 kB' 'DirectMap1G: 40894464 kB' 00:03:53.330-00:03:53.332 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [repeated scan: each /proc/meminfo field from MemTotal through HugePages_Free compared against HugePages_Rsvd; no match, continue] 00:03:53.332 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.332 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.332 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:53.332 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:53.332 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:53.332 nr_hugepages=1536 00:03:53.332 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:53.332 resv_hugepages=0 00:03:53.332 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 --
# echo surplus_hugepages=0 00:03:53.332 surplus_hugepages=0 00:03:53.332 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:53.332 anon_hugepages=0 00:03:53.332 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:53.332 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:53.332 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:53.332 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:53.332 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:53.332 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:53.332 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.332 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.332 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.332 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.332 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.332 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.332 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.332 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.332 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 38546908 kB' 'MemAvailable: 42125908 kB' 'Buffers: 5128 kB' 'Cached: 14766312 kB' 'SwapCached: 0 kB' 'Active: 11805964 kB' 'Inactive: 3520372 kB' 'Active(anon): 11393428 kB' 'Inactive(anon): 0 kB' 
'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558172 kB' 'Mapped: 164260 kB' 'Shmem: 10838532 kB' 'KReclaimable: 286464 kB' 'Slab: 915476 kB' 'SReclaimable: 286464 kB' 'SUnreclaim: 629012 kB' 'KernelStack: 22096 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 12821824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218812 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 24516608 kB' 'DirectMap1G: 40894464 kB' 00:03:53.332-00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [repeated scan: /proc/meminfo fields compared against HugePages_Total; no match, continue; trace truncated mid-scan]
21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 
21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 
21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.333 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.334 
21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@32 -- # no_nodes=2 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22669680 kB' 'MemUsed: 9969460 kB' 'SwapCached: 0 kB' 'Active: 7255944 kB' 'Inactive: 175376 kB' 'Active(anon): 7050864 kB' 'Inactive(anon): 0 kB' 'Active(file): 205080 kB' 'Inactive(file): 175376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7027340 kB' 'Mapped: 113872 kB' 'AnonPages: 407200 kB' 'Shmem: 6646884 kB' 'KernelStack: 12056 kB' 'PageTables: 5472 kB' 'SecPageTables: 0 
kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132936 kB' 'Slab: 434892 kB' 'SReclaimable: 132936 kB' 'SUnreclaim: 301956 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.334 
21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.334 
21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.334 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.335 21:47:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [elided: the read loop `continue`s past every remaining node-0 meminfo field (WritebackTmp .. HugePages_Free); only HugePages_Surp matches]
00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-24 -- # [elided: get_meminfo locals; node=1, so mem_f=/sys/devices/system/node/node1/meminfo]
00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656072 kB' 'MemFree: 15876056 kB' 'MemUsed: 11780016 kB' 'SwapCached: 0 kB' 'Active: 4549956 kB' 'Inactive: 3344996 kB' 'Active(anon): 4342500 kB' 'Inactive(anon): 0 kB' 'Active(file): 207456 kB' 'Inactive(file): 3344996 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7744124 kB' 'Mapped: 50396 kB' 'AnonPages: 150888 kB' 'Shmem: 4191672 kB' 'KernelStack: 10008 kB' 'PageTables: 2376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 153528 kB' 'Slab: 480584 kB' 'SReclaimable: 153528 kB' 'SUnreclaim: 327056 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:53.335 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [elided: the read loop `continue`s past node-1 meminfo fields that are not HugePages_Surp]
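The get_meminfo helper being traced above can be sketched as follows, reconstructed from the setup/common.sh@16-33 xtrace lines (the optional file argument is an addition for illustration only; the real helper always reads /proc/meminfo or the per-node sysfs file):

```shell
#!/usr/bin/env bash
# Sketch of get_meminfo as inferred from the xtrace above. Assumptions are
# labeled; the third argument (override file) is NOT in the real helper.
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {
	local get=$1 node=${2:-}
	local var val _
	local mem_f=${3:-/proc/meminfo}   # illustration-only override argument
	# Per-node queries read that node's own meminfo when it exists
	# (common.sh@23-24 in the trace).
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	local -a mem
	mapfile -t mem < "$mem_f"
	# sysfs per-node files prefix every line with "Node N "; strip it
	# (common.sh@29).
	mem=("${mem[@]#Node +([0-9]) }")
	local line
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		# Skip every field that is not the requested one -- this is the
		# long run of "continue" entries in the log.
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	echo 0   # field absent: report 0, as the trace's fall-through does
}
```

With the node-1 snapshot printed in the log, `get_meminfo HugePages_Surp 1` walks past every field until `HugePages_Surp: 0` and echoes 0, which is exactly the `echo 0` / `return 0` pair in the trace.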
00:03:53.336 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [elided: remaining field checks (KReclaimable .. HugePages_Free) `continue`d past]
00:03:53.337 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.337 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:53.337 21:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:53.337 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:53.337 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:53.337 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:53.337 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:53.337 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:53.337 node0=512 expecting 512
00:03:53.337 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:53.337 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:53.337 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:53.337 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:53.337 node1=1024 expecting 1024
00:03:53.337 21:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:53.337
00:03:53.337 real 0m3.942s
00:03:53.337 user 0m1.295s
00:03:53.337 sys 0m2.562s
00:03:53.337 21:47:12 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:53.337 21:47:12 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:53.337 ************************************
00:03:53.337 END TEST custom_alloc
00:03:53.337 ************************************
00:03:53.337 21:47:12 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:53.337 21:47:12 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:53.337 21:47:12 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:53.337 21:47:12 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:53.337 21:47:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:53.337 ************************************
00:03:53.337 START TEST no_shrink_alloc
00:03:53.337 ************************************
00:03:53.337 21:47:12 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:03:53.337 21:47:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:53.337 21:47:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:53.337 21:47:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50-73 -- # [elided: derives nr_hugepages=1024 from size, sets user_nodes=('0') and nodes_test[0]=1024]
00:03:53.337 21:47:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:53.337 21:47:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:53.337 21:47:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:03:57.535 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:57.535 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:57.535 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:57.535 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:57.535 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:57.535 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:57.535 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:57.535 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:57.535 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:57.535 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:57.535 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:57.535 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:57.535 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:57.535 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:57.535 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:57.535 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:57.535 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:57.535 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:57.535 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89-94 -- # [elided: locals node, sorted_t, sorted_s, surp, resv, anon]
00:03:57.535 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:57.535 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:57.535 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-25 -- # [elided: get_meminfo locals; node unset, so mem_f=/proc/meminfo]
00:03:57.535 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.535 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.535 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39574924 kB' 'MemAvailable: 43153924 kB' 'Buffers: 5128 kB' 'Cached: 14766428 kB' 'SwapCached: 0 kB' 'Active: 11806784 kB' 'Inactive: 3520372 kB' 'Active(anon): 11394248 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558844 kB' 'Mapped: 164356 kB' 'Shmem: 10838648 kB' 'KReclaimable: 286464 kB' 'Slab: 915624 kB' 'SReclaimable: 286464 kB' 'SUnreclaim: 629160 kB' 'KernelStack: 22000 kB' 'PageTables: 8024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12819588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218764 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 24516608 kB' 'DirectMap1G: 40894464 kB'
00:03:57.535 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [elided: the read loop `continue`s past meminfo fields that are not AnonHugePages] 00:03:57.536
21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 21:47:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 21:47:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 
21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 21:47:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:57.536 21:47:16 
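
The xtrace here shows `setup/common.sh`'s `get_meminfo` helper walking `/proc/meminfo` one `Field: value kB` record at a time (`IFS=': '`, `read -r var val _`, `continue` on non-matching keys, then `echo`/`return 0` on a hit — here `AnonHugePages: 0`). A minimal sketch of that loop, reconstructed from the trace; the argument handling and the file-override parameter are illustrative assumptions, not copied from the actual script:

```shell
# Hypothetical reconstruction of the get_meminfo pattern traced above:
# split each /proc/meminfo line on ':' and ' ', skip fields that don't
# match the requested key, and print the matching field's value (in kB
# for most fields, a bare count for HugePages_* fields).
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # e.g. skip MemTotal, MemFree, ...
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}
```

In the trace, the `AnonHugePages` lookup yields 0, which the caller records as `anon=0` before issuing the next `get_meminfo HugePages_Surp` query below.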
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.537 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39576832 kB' 'MemAvailable: 43155832 kB' 'Buffers: 5128 kB' 'Cached: 14766440 kB' 'SwapCached: 0 kB' 'Active: 11806772 kB' 'Inactive: 3520372 kB' 'Active(anon): 11394236 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558936 kB' 'Mapped: 164272 kB' 'Shmem: 10838660 kB' 'KReclaimable: 286464 kB' 'Slab: 915656 kB' 'SReclaimable: 286464 kB' 
'SUnreclaim: 629192 kB' 'KernelStack: 22048 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12819844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218748 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 24516608 kB' 'DirectMap1G: 40894464 kB' 00:03:57.537 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.537 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.537 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.537 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.537 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.537 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.537 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.537 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.537 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.537 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.537 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.537 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.537 21:47:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.537 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical compare/continue/read cycle repeated for the intervening /proc/meminfo fields ...]
00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 21:47:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39577564 kB' 'MemAvailable: 43156564 kB' 'Buffers: 5128 kB' 'Cached: 14766464 kB' 'SwapCached: 0 kB' 'Active: 11806432 
kB' 'Inactive: 3520372 kB' 'Active(anon): 11393896 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558544 kB' 'Mapped: 164272 kB' 'Shmem: 10838684 kB' 'KReclaimable: 286464 kB' 'Slab: 915632 kB' 'SReclaimable: 286464 kB' 'SUnreclaim: 629168 kB' 'KernelStack: 22032 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12820000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218748 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 24516608 kB' 'DirectMap1G: 40894464 kB' 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 21:47:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 
21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 21:47:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 
21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 
21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 21:47:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 
21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:57.540 nr_hugepages=1024 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:57.540 resv_hugepages=0 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:57.540 surplus_hugepages=0 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:57.540 anon_hugepages=0 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 
00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39577564 kB' 'MemAvailable: 43156564 kB' 'Buffers: 5128 kB' 'Cached: 14766484 kB' 'SwapCached: 0 kB' 'Active: 11806456 kB' 'Inactive: 3520372 kB' 'Active(anon): 11393920 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558516 kB' 'Mapped: 164272 kB' 'Shmem: 10838704 kB' 'KReclaimable: 286464 kB' 'Slab: 915632 kB' 'SReclaimable: 286464 kB' 'SUnreclaim: 629168 kB' 'KernelStack: 22016 kB' 'PageTables: 8132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12820020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218748 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 
kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 24516608 kB' 'DirectMap1G: 40894464 kB' 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 21:47:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # 
no_nodes=2 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21624896 kB' 'MemUsed: 11014244 kB' 'SwapCached: 0 kB' 'Active: 7256256 kB' 'Inactive: 175376 kB' 'Active(anon): 7051176 kB' 'Inactive(anon): 0 kB' 'Active(file): 205080 kB' 'Inactive(file): 175376 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7027472 kB' 'Mapped: 113884 kB' 'AnonPages: 407392 kB' 'Shmem: 6647016 kB' 'KernelStack: 12056 kB' 'PageTables: 5512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132936 kB' 'Slab: 435220 kB' 'SReclaimable: 132936 kB' 'SUnreclaim: 302284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.542 21:47:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 21:47:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 
21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 
setup/common.sh@33 -- # echo 0 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:57.543 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:57.544 node0=1024 expecting 1024 00:03:57.544 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:57.544 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:57.544 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:57.544 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:57.544 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.544 21:47:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:00.835 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:00.835 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:00.836 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:00.836 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:00.836 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:00.836 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:00.836 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:00.836 0000:00:04.0 (8086 2021): Already using the vfio-pci 
driver 00:04:00.836 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:00.836 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:00.836 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:00.836 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:00.836 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:00.836 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:00.836 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:00.836 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:01.100 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:01.100 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 
00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39554124 kB' 'MemAvailable: 43133124 kB' 'Buffers: 5128 kB' 'Cached: 14766588 kB' 'SwapCached: 0 kB' 'Active: 11808572 kB' 'Inactive: 3520372 kB' 'Active(anon): 11396036 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560252 kB' 'Mapped: 164276 kB' 'Shmem: 10838808 kB' 'KReclaimable: 286464 kB' 'Slab: 915860 kB' 'SReclaimable: 286464 kB' 'SUnreclaim: 629396 kB' 'KernelStack: 22192 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12820752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218812 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 24516608 kB' 'DirectMap1G: 40894464 kB' 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.100 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.102 21:47:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39554852 kB' 'MemAvailable: 43133852 kB' 'Buffers: 5128 kB' 'Cached: 14766592 kB' 'SwapCached: 0 kB' 'Active: 11807532 kB' 'Inactive: 3520372 kB' 'Active(anon): 11394996 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559576 kB' 'Mapped: 164276 kB' 'Shmem: 10838812 kB' 'KReclaimable: 286464 kB' 'Slab: 915940 kB' 'SReclaimable: 286464 kB' 'SUnreclaim: 629476 kB' 'KernelStack: 22032 kB' 'PageTables: 8176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12820768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218764 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 24516608 kB' 'DirectMap1G: 40894464 kB' 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.102 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.103 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.104 21:47:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.104 21:47:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 
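The trace above is the per-field scan performed by `get_meminfo` in `setup/common.sh`: with `IFS=': '` it reads each `/proc/meminfo` line into `var`/`val`, skips (`continue`) every field that does not match the requested key, and echoes the matching value. A minimal sketch of that loop, written as an assumption from the trace (the helper name `get_meminfo_sketch` and the sample file path are hypothetical, not from the log):

```shell
# Sketch of the /proc/meminfo field lookup seen in the trace above.
# Assumption: mirrors the IFS=': ' + read -r loop of setup/common.sh's
# get_meminfo; function name and sample path are illustrative only.
get_meminfo_sketch() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching field produces the "continue" lines in the log.
        [[ $var == "$get" ]] || continue
        echo "$val"   # e.g. "0" for HugePages_Rsvd, "1024" for HugePages_Total
        return 0
    done < "$mem_f"
    return 1
}

# Usage against a small snapshot shaped like the dump logged above:
printf '%s\n' 'HugePages_Total: 1024' 'HugePages_Rsvd: 0' > /tmp/meminfo.sample
get_meminfo_sketch HugePages_Rsvd /tmp/meminfo.sample   # prints: 0
```

Scanning every field and matching the key, rather than grepping, lets the same loop serve per-node meminfo files (`/sys/devices/system/node/node<N>/meminfo`) whose lines carry a `Node <N>` prefix that the script strips first.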
00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39555952 kB' 'MemAvailable: 43134952 kB' 'Buffers: 5128 kB' 'Cached: 14766608 kB' 'SwapCached: 0 kB' 'Active: 11807408 kB' 'Inactive: 3520372 kB' 'Active(anon): 11394872 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559416 kB' 'Mapped: 164276 kB' 'Shmem: 10838828 kB' 'KReclaimable: 286464 kB' 'Slab: 915940 kB' 'SReclaimable: 286464 kB' 'SUnreclaim: 629476 kB' 'KernelStack: 22016 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 
kB' 'Committed_AS: 12821800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218764 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 24516608 kB' 'DirectMap1G: 40894464 kB' 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.104 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.105 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.105 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.105 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.105 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.105 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.105 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.105 
21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.105 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.105 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.105 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical "[[ <field> == HugePages_Rsvd ]] / continue" trace lines for the remaining /proc/meminfo fields elided ...]
00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.106 21:47:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:01.106 nr_hugepages=1024 00:04:01.106 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:01.107 resv_hugepages=0 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:01.107 surplus_hugepages=0 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:01.107 anon_hugepages=0 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 
-- # mapfile -t mem 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39555796 kB' 'MemAvailable: 43134796 kB' 'Buffers: 5128 kB' 'Cached: 14766628 kB' 'SwapCached: 0 kB' 'Active: 11810328 kB' 'Inactive: 3520372 kB' 'Active(anon): 11397792 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562324 kB' 'Mapped: 164780 kB' 'Shmem: 10838848 kB' 'KReclaimable: 286464 kB' 'Slab: 915940 kB' 'SReclaimable: 286464 kB' 'SUnreclaim: 629476 kB' 'KernelStack: 22016 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12824808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218732 kB' 'VmallocChunk: 0 kB' 'Percpu: 92736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3480948 kB' 'DirectMap2M: 24516608 kB' 'DirectMap1G: 40894464 kB' 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.107 21:47:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.107 21:47:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.107 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.108 21:47:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.108 21:47:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node=0 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.108 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21623608 kB' 'MemUsed: 11015532 kB' 'SwapCached: 0 kB' 'Active: 7255416 kB' 'Inactive: 175376 kB' 'Active(anon): 7050336 kB' 'Inactive(anon): 0 kB' 'Active(file): 205080 kB' 'Inactive(file): 175376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7027584 kB' 'Mapped: 114044 kB' 'AnonPages: 406416 kB' 'Shmem: 6647128 kB' 'KernelStack: 12040 kB' 'PageTables: 5368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132936 kB' 'Slab: 435300 kB' 'SReclaimable: 132936 kB' 'SUnreclaim: 302364 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.109 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.370 21:47:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.370 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.371 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.371 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.371 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.371 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:01.371 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.371 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.371 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.371 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.371 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 
'node0=1024 expecting 1024' 00:04:01.371 node0=1024 expecting 1024 00:04:01.371 21:47:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:01.371 00:04:01.371 real 0m8.190s 00:04:01.371 user 0m3.076s 00:04:01.371 sys 0m5.223s 00:04:01.371 21:47:20 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.371 21:47:20 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:01.371 ************************************ 00:04:01.371 END TEST no_shrink_alloc 00:04:01.371 ************************************ 00:04:01.371 21:47:20 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:01.371 21:47:20 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:01.371 21:47:20 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:01.371 21:47:20 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:01.371 21:47:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:01.371 21:47:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:01.371 21:47:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:01.371 21:47:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:01.371 21:47:20 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:01.371 21:47:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:01.371 21:47:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:01.371 21:47:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:01.371 21:47:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:01.371 21:47:20 setup.sh.hugepages -- 
setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:01.371 21:47:20 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:01.371 00:04:01.371 real 0m31.208s 00:04:01.371 user 0m10.400s 00:04:01.371 sys 0m19.002s 00:04:01.371 21:47:20 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.371 21:47:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:01.371 ************************************ 00:04:01.371 END TEST hugepages 00:04:01.371 ************************************ 00:04:01.371 21:47:20 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:01.371 21:47:20 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:04:01.371 21:47:20 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.371 21:47:20 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.371 21:47:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:01.371 ************************************ 00:04:01.371 START TEST driver 00:04:01.371 ************************************ 00:04:01.371 21:47:20 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:04:01.371 * Looking for test storage... 
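The `clear_hp` entries just before `END TEST hugepages` iterate every NUMA node and echo `0` into each per-size hugepage pool. A minimal sketch of that cleanup, assuming a sysfs-shaped tree; the base directory is a parameter here so the sketch can run against a fake tree rather than the live `/sys`:

```shell
#!/usr/bin/env bash
# Sketch of the clear_hp step traced above: zero nr_hugepages for every
# hugepage size on every node under a /sys/devices/system/node-style tree.
clear_hp() {
  local base=${1:-/sys/devices/system/node} node hp
  for node in "$base"/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
      # Same effect as the repeated "echo 0" entries in the log
      echo 0 > "$hp/nr_hugepages"
    done
  done
}
```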
00:04:01.371 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:04:01.371 21:47:20 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:01.371 21:47:20 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:01.371 21:47:20 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:07.945 21:47:26 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:07.945 21:47:26 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.945 21:47:26 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.945 21:47:26 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:07.945 ************************************ 00:04:07.945 START TEST guess_driver 00:04:07.945 ************************************ 00:04:07.945 21:47:26 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:07.945 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:07.945 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:07.945 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:07.945 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:07.945 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:07.945 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:07.945 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:07.945 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:07.945 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:07.945 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- 
# (( 256 > 0 )) 00:04:07.945 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:07.945 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:07.945 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:07.945 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:07.945 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:07.945 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:07.945 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:07.945 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:07.945 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:07.945 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:07.945 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:07.945 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:07.945 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:07.945 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:07.945 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:07.945 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:07.945 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:07.945 Looking for driver=vfio-pci 00:04:07.945 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.945 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output 
config 00:04:07.945 21:47:26 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.945 21:47:26 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # 
[[ vfio-pci == vfio-pci ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ 
_ _ _ marker setup_driver 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.239 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.149 21:47:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.149 21:47:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.149 21:47:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.149 21:47:32 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:13.149 21:47:32 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:13.149 21:47:32 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:13.149 21:47:32 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:18.424 00:04:18.424 real 0m11.081s 00:04:18.424 user 0m2.808s 00:04:18.424 sys 0m5.609s 00:04:18.424 21:47:37 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.424 21:47:37 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:18.424 ************************************ 00:04:18.424 END TEST guess_driver 00:04:18.424 ************************************ 00:04:18.424 21:47:37 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:18.424 00:04:18.424 real 0m16.808s 00:04:18.424 user 0m4.466s 00:04:18.424 sys 0m8.936s 00:04:18.424 21:47:37 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.424 21:47:37 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:18.424 ************************************ 00:04:18.424 END TEST driver 00:04:18.424 ************************************ 00:04:18.424 21:47:37 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:18.424 21:47:37 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:04:18.424 21:47:37 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.424 21:47:37 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.424 21:47:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:18.424 ************************************ 00:04:18.424 START TEST devices 00:04:18.424 ************************************ 00:04:18.424 21:47:37 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:04:18.424 * Looking for test storage... 
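The `guess_driver` trace that just finished settled on `vfio-pci` because the node exposes populated IOMMU groups (`(( 256 > 0 ))`) and `modprobe --show-depends vfio_pci` resolved to real `.ko` modules. A heavily simplified, hypothetical sketch of that decision — it keeps only the group-count branch and drops the unsafe-noiommu and modprobe checks the real `driver.sh` also performs:

```shell
#!/usr/bin/env bash
# Sketch of the pick_driver decision from the trace: prefer vfio-pci when the
# kernel exposes at least one IOMMU group, otherwise report no valid driver.
pick_driver() {
  local groups_dir=${1:-/sys/kernel/iommu_groups}
  local groups=("$groups_dir"/*)
  # With no matches the glob stays literal, so -e on the first entry fails
  if [[ -e ${groups[0]} ]] && (( ${#groups[@]} > 0 )); then
    echo vfio-pci
  else
    echo 'No valid driver found'
  fi
}
```

The `[[ vfio-pci == \N\o\ \v\a\l\i\d\ ... ]]` comparison in the log is the caller checking for exactly that fallback string.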
00:04:18.424 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:04:18.424 21:47:37 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:18.424 21:47:37 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:18.424 21:47:37 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:18.424 21:47:37 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:22.618 21:47:41 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:22.618 21:47:41 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:22.618 21:47:41 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:22.618 21:47:41 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:22.618 21:47:41 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:22.618 21:47:41 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:22.618 21:47:41 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:22.618 21:47:41 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:22.618 21:47:41 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:22.618 21:47:41 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:22.618 21:47:41 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:22.618 21:47:41 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:22.618 21:47:41 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:22.618 21:47:41 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:22.618 21:47:41 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:22.618 21:47:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
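The `get_zoned_devs` / `is_block_zoned` trace above excludes zoned block devices from the test pool by reading each device's sysfs `queue/zoned` attribute (here `none`, so nvme0n1 is kept). A hedged sketch of that check, parameterized on the sysfs root so it can run against a fake tree instead of the real `/sys` (the parameterization is an assumption; the real script hardcodes `/sys/block`):

```shell
#!/usr/bin/env bash
# A device counts as zoned when /sys/block/<dev>/queue/zoned reads
# anything other than "none" (e.g. "host-managed" or "host-aware").
is_block_zoned() {
  local sysfs=$1 device=$2
  # A missing attribute means the kernel predates zoned support: not zoned
  [[ -e $sysfs/$device/queue/zoned ]] || return 1
  [[ $(<"$sysfs/$device/queue/zoned") != none ]]
}
```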
00:04:22.618 21:47:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:22.618 21:47:41 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:22.618 21:47:41 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:22.618 21:47:41 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:22.618 21:47:41 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:22.618 21:47:41 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:22.877 No valid GPT data, bailing 00:04:22.877 21:47:42 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:22.877 21:47:42 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:22.877 21:47:42 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:22.877 21:47:42 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:22.877 21:47:42 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:22.877 21:47:42 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:22.877 21:47:42 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016 00:04:22.877 21:47:42 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:04:22.877 21:47:42 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:22.877 21:47:42 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:22.877 21:47:42 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:22.877 21:47:42 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:22.877 21:47:42 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:22.877 21:47:42 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.877 21:47:42 setup.sh.devices -- 
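The size gate traced above (`min_disk_size=3221225472`, `echo 2000398934016`, `(( 2000398934016 >= min_disk_size ))`) admits a disk into the test pool only if its capacity is at least 3 GiB. A sketch of that arithmetic; the helper names mirror the log, and the 512-byte-sector convention follows `/sys/block/<dev>/size`, which reports 512-byte units regardless of the device's logical block size:

```shell
#!/usr/bin/env bash
min_disk_size=3221225472   # 3 GiB, as in setup/devices.sh@198

# Convert a sector count (as read from /sys/block/<dev>/size) to bytes.
sec_size_to_bytes() {
  echo $(( $1 * 512 ))
}

# A disk qualifies when its byte size meets the minimum.
disk_usable() {
  (( $(sec_size_to_bytes "$1") >= min_disk_size ))
}
```

With the sector count of the ~2 TB drive in the log (3907029168 sectors), `sec_size_to_bytes` reproduces the traced `2000398934016`.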
common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.877 21:47:42 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:22.877 ************************************ 00:04:22.877 START TEST nvme_mount 00:04:22.877 ************************************ 00:04:22.877 21:47:42 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:22.877 21:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:22.877 21:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:22.877 21:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.877 21:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:22.877 21:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:22.877 21:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:22.877 21:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:22.877 21:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:22.877 21:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:22.877 21:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:22.877 21:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:22.877 21:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:22.877 21:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:22.877 21:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:22.877 21:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:22.877 21:47:42 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:22.877 21:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:22.877 21:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:22.877 21:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:23.813 Creating new GPT entries in memory. 00:04:23.813 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:23.813 other utilities. 00:04:23.813 21:47:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:23.813 21:47:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:23.813 21:47:43 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:23.813 21:47:43 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:23.813 21:47:43 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:24.751 Creating new GPT entries in memory. 00:04:24.751 The operation has completed successfully. 
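The partition arithmetic traced above (`(( size /= 512 ))`, `part_start = part_start == 0 ? 2048 : part_end + 1`, `part_end = part_start + size - 1`) is what produces the sgdisk call `--new=1:2048:2099199`: a 1 GiB partition is 2097152 sectors, the first starts at sector 2048, and each later one starts right after the previous end. A sketch that reproduces those bounds (the wrapper function is hypothetical; the arithmetic is copied from the trace):

```shell
#!/usr/bin/env bash
# Emit one sgdisk --new=<n>:<start>:<end> argument per partition.
part_bounds() {
  local part_no=$1 size=$2 part part_start=0 part_end=0
  (( size /= 512 ))   # bytes -> 512-byte sectors (1 GiB -> 2097152)
  for (( part = 1; part <= part_no; part++ )); do
    # First partition starts at the conventional 2048; later ones abut
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    echo "--new=$part:$part_start:$part_end"
  done
}
```

Two 1 GiB partitions yield `--new=1:2048:2099199` and `--new=2:2099200:4196351`, matching both sgdisk invocations in this log (nvme_mount and dm_mount).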
00:04:24.751 21:47:44 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:24.751 21:47:44 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:24.751 21:47:44 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1253255 00:04:25.009 21:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.009 21:47:44 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:25.009 21:47:44 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.009 21:47:44 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:25.009 21:47:44 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:25.009 21:47:44 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.009 21:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.009 21:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:25.009 21:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:25.009 21:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.009 21:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.009 
21:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:25.009 21:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:25.009 21:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:25.009 21:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:25.009 21:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.009 21:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:25.009 21:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:25.009 21:47:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.009 21:47:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:28.303 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.303 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.303 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.303 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.303 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.303 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.303 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.303 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.303 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 
00:04:28.303 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.303 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.303 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.303 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.303 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.303 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.303 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.303 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.303 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.303 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.303 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.303 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.303 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.304 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.304 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.304 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.304 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.304 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.304 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.304 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.304 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.304 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.304 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.304 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.304 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:28.304 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:28.304 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.304 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:28.304 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:28.304 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.304 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:28.304 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:28.304 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:28.304 21:47:47 
setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.562 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.562 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:28.562 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:28.562 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:28.562 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:28.562 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:28.820 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:28.820 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:28.820 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:28.820 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:28.820 21:47:47 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:28.820 21:47:47 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:28.820 21:47:47 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.820 21:47:47 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:28.820 21:47:47 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:28.820 21:47:48 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
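The `cleanup_nvme` sequence traced above tears things down strictly outside-in: unmount the filesystem, wipe the partition's signatures, then wipe the whole disk's GPT (the real script guards each step with `mountpoint -q` and `[[ -b ]]`). A dry-run sketch of that ordering; `cleanup_nvme_plan` and the `RUN=echo` planner convention are assumptions for safe illustration, not the script's actual interface:

```shell
#!/usr/bin/env bash
# Print (or, with RUN=, execute) the cleanup steps in the traced order.
cleanup_nvme_plan() {
  local mnt=$1 part=$2 disk=$3 RUN=${RUN:-echo}
  $RUN umount "$mnt"          # guarded by mountpoint -q in the real script
  $RUN wipefs --all "$part"   # guarded by [[ -b $part ]]
  $RUN wipefs --all "$disk"   # guarded by [[ -b $disk ]]
}
```

Wiping the partition before the disk matters: once the GPT is gone the partition node may disappear, so its ext4 signature (the `53 ef` bytes at offset 0x438 in the log) could otherwise survive and confuse a later run.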
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.820 21:47:48 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:28.820 21:47:48 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:28.820 21:47:48 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:28.820 21:47:48 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.820 21:47:48 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:28.820 21:47:48 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:28.820 21:47:48 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:28.820 21:47:48 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:28.820 21:47:48 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:28.820 21:47:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.820 21:47:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:28.820 21:47:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:28.820 21:47:48 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.820 21:47:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 
00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.108 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.367 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.367 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:32.367 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:32.367 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:04:32.367 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:32.367 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:32.367 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.367 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:32.367 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:32.367 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.367 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:32.367 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:32.367 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:32.367 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:32.367 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:32.367 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:32.367 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:32.367 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:32.367 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.367 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:32.367 21:47:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # 
setup output config 00:04:32.367 21:47:51 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.367 21:47:51 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.742 21:47:54 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.742 21:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.742 21:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.742 21:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:35.742 21:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:35.742 21:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.002 21:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:36.002 21:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:36.002 21:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:36.002 21:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:36.002 21:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.002 21:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:36.002 21:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:36.002 21:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:36.002 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:36.002 00:04:36.002 real 0m13.059s 00:04:36.002 user 0m3.552s 00:04:36.002 sys 0m7.093s 00:04:36.002 21:47:55 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.002 21:47:55 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:36.002 ************************************ 00:04:36.002 END TEST nvme_mount 00:04:36.002 ************************************ 00:04:36.002 21:47:55 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:36.002 21:47:55 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:36.002 21:47:55 setup.sh.devices -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.002 21:47:55 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.002 21:47:55 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:36.002 ************************************ 00:04:36.002 START TEST dm_mount 00:04:36.002 ************************************ 00:04:36.002 21:47:55 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:36.002 21:47:55 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:36.002 21:47:55 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:36.002 21:47:55 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:36.002 21:47:55 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:36.002 21:47:55 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:36.002 21:47:55 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:36.002 21:47:55 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:36.002 21:47:55 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:36.002 21:47:55 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:36.002 21:47:55 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:36.002 21:47:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:36.002 21:47:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:36.002 21:47:55 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:36.002 21:47:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:36.002 21:47:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:36.002 21:47:55 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:36.002 21:47:55 
setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:36.003 21:47:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:36.003 21:47:55 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:36.003 21:47:55 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:36.003 21:47:55 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:36.943 Creating new GPT entries in memory. 00:04:36.943 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:36.943 other utilities. 00:04:36.943 21:47:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:36.943 21:47:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:36.943 21:47:56 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:36.943 21:47:56 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:36.943 21:47:56 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:38.323 Creating new GPT entries in memory. 00:04:38.323 The operation has completed successfully. 00:04:38.323 21:47:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:38.323 21:47:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:38.323 21:47:57 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:38.323 21:47:57 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:38.323 21:47:57 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:39.260 The operation has completed successfully. 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1258151 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- 
setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount size= 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.260 21:47:58 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- 
setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.447 21:48:02 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.657 21:48:06 
setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.657 21:48:06 
setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:47.657 21:48:06 
setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:47.657 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:47.657 00:04:47.657 real 0m11.338s 00:04:47.657 user 0m2.913s 00:04:47.657 sys 0m5.492s 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.657 21:48:06 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:47.657 ************************************ 00:04:47.657 END TEST dm_mount 00:04:47.657 ************************************ 00:04:47.657 21:48:06 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:47.657 21:48:06 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:47.657 21:48:06 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:47.657 21:48:06 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.657 21:48:06 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:47.657 21:48:06 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:47.657 21:48:06 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:47.657 21:48:06 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:47.657 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:47.657 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:47.657 /dev/nvme0n1: 2 bytes 
were erased at offset 0x000001fe (PMBR): 55 aa 00:04:47.657 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:47.657 21:48:06 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:47.657 21:48:06 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:47.657 21:48:06 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:47.657 21:48:06 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:47.657 21:48:06 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:47.657 21:48:06 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:47.657 21:48:06 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:47.657 00:04:47.657 real 0m29.394s 00:04:47.657 user 0m8.150s 00:04:47.657 sys 0m15.803s 00:04:47.657 21:48:06 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.657 21:48:06 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:47.657 ************************************ 00:04:47.657 END TEST devices 00:04:47.657 ************************************ 00:04:47.657 21:48:06 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:47.657 00:04:47.657 real 1m46.855s 00:04:47.657 user 0m32.271s 00:04:47.657 sys 1m1.842s 00:04:47.657 21:48:06 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.657 21:48:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:47.657 ************************************ 00:04:47.657 END TEST setup.sh 00:04:47.657 ************************************ 00:04:47.657 21:48:06 -- common/autotest_common.sh@1142 -- # return 0 00:04:47.657 21:48:06 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:04:51.840 Hugepages 00:04:51.840 node hugesize free / total 00:04:51.840 node0 1048576kB 0 / 0 00:04:51.840 node0 2048kB 
1024 / 1024 00:04:51.840 node1 1048576kB 0 / 0 00:04:51.840 node1 2048kB 1024 / 1024 00:04:51.840 00:04:51.840 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:51.840 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:51.840 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:51.840 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:51.840 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:51.840 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:51.840 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:51.840 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:51.840 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:51.840 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:51.840 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:51.840 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:51.840 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:51.840 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:51.840 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:51.840 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:51.840 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:51.840 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:51.840 21:48:11 -- spdk/autotest.sh@130 -- # uname -s 00:04:51.840 21:48:11 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:51.840 21:48:11 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:51.840 21:48:11 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:56.064 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:56.064 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:56.064 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:56.064 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:56.064 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:56.064 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:56.064 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:56.064 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:56.064 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:56.064 
0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:56.064 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:56.064 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:56.064 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:56.064 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:56.064 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:56.064 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:57.965 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:57.965 21:48:17 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:58.901 21:48:18 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:58.901 21:48:18 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:58.901 21:48:18 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:58.901 21:48:18 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:58.901 21:48:18 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:58.901 21:48:18 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:58.901 21:48:18 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:58.901 21:48:18 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:58.901 21:48:18 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:58.901 21:48:18 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:58.901 21:48:18 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:04:58.901 21:48:18 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:05:03.081 Waiting for block devices as requested 00:05:03.081 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:03.081 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:03.081 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:03.081 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:03.081 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:03.081 0000:00:04.2 (8086 
2021): vfio-pci -> ioatdma 00:05:03.081 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:03.081 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:03.081 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:03.081 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:03.340 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:03.340 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:03.340 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:03.598 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:03.598 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:03.598 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:03.857 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:03.857 21:48:23 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:03.857 21:48:23 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:03.857 21:48:23 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:03.857 21:48:23 -- common/autotest_common.sh@1502 -- # grep 0000:d8:00.0/nvme/nvme 00:05:03.857 21:48:23 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:03.857 21:48:23 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:03.857 21:48:23 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:03.857 21:48:23 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:03.857 21:48:23 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:03.857 21:48:23 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:03.857 21:48:23 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:03.857 21:48:23 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:03.857 21:48:23 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:03.857 21:48:23 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:03.857 
21:48:23 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:03.857 21:48:23 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:03.857 21:48:23 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:03.857 21:48:23 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:03.857 21:48:23 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:03.857 21:48:23 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:03.857 21:48:23 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:03.857 21:48:23 -- common/autotest_common.sh@1557 -- # continue 00:05:03.857 21:48:23 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:03.857 21:48:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:03.857 21:48:23 -- common/autotest_common.sh@10 -- # set +x 00:05:03.857 21:48:23 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:03.857 21:48:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:03.857 21:48:23 -- common/autotest_common.sh@10 -- # set +x 00:05:03.857 21:48:23 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:05:08.049 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:08.049 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:08.049 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:08.049 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:08.049 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:08.049 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:08.049 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:08.049 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:08.049 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:08.049 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:08.049 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:08.049 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:08.049 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:08.049 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:08.049 0000:80:04.1 (8086 
2021): ioatdma -> vfio-pci 00:05:08.049 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:09.428 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:09.428 21:48:28 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:09.428 21:48:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:09.428 21:48:28 -- common/autotest_common.sh@10 -- # set +x 00:05:09.428 21:48:28 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:09.428 21:48:28 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:09.428 21:48:28 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:09.428 21:48:28 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:09.428 21:48:28 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:09.428 21:48:28 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:09.428 21:48:28 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:09.428 21:48:28 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:09.428 21:48:28 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:09.428 21:48:28 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:09.428 21:48:28 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:09.687 21:48:28 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:09.687 21:48:28 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:05:09.687 21:48:28 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:09.687 21:48:28 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:09.687 21:48:28 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:09.687 21:48:28 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:09.687 21:48:28 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:09.687 21:48:28 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:d8:00.0 
00:05:09.687 21:48:28 -- common/autotest_common.sh@1592 -- # [[ -z 0000:d8:00.0 ]] 00:05:09.687 21:48:28 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1269989 00:05:09.687 21:48:28 -- common/autotest_common.sh@1598 -- # waitforlisten 1269989 00:05:09.687 21:48:28 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.687 21:48:28 -- common/autotest_common.sh@829 -- # '[' -z 1269989 ']' 00:05:09.687 21:48:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.687 21:48:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.687 21:48:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.687 21:48:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.687 21:48:28 -- common/autotest_common.sh@10 -- # set +x 00:05:09.687 [2024-07-13 21:48:28.996896] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:09.687 [2024-07-13 21:48:28.996993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1269989 ] 00:05:09.947 [2024-07-13 21:48:29.141229] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.206 [2024-07-13 21:48:29.345924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.143 21:48:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.143 21:48:30 -- common/autotest_common.sh@862 -- # return 0 00:05:11.143 21:48:30 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:11.143 21:48:30 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:11.143 21:48:30 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:14.466 nvme0n1 00:05:14.466 21:48:33 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:14.466 [2024-07-13 21:48:33.431907] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:14.466 request: 00:05:14.466 { 00:05:14.466 "nvme_ctrlr_name": "nvme0", 00:05:14.466 "password": "test", 00:05:14.466 "method": "bdev_nvme_opal_revert", 00:05:14.466 "req_id": 1 00:05:14.466 } 00:05:14.466 Got JSON-RPC error response 00:05:14.466 response: 00:05:14.466 { 00:05:14.466 "code": -32602, 00:05:14.466 "message": "Invalid parameters" 00:05:14.466 } 00:05:14.466 21:48:33 -- common/autotest_common.sh@1604 -- # true 00:05:14.466 21:48:33 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:14.466 21:48:33 -- common/autotest_common.sh@1608 -- # killprocess 1269989 00:05:14.466 21:48:33 -- common/autotest_common.sh@948 -- # '[' -z 1269989 ']' 00:05:14.466 21:48:33 -- 
common/autotest_common.sh@952 -- # kill -0 1269989 00:05:14.466 21:48:33 -- common/autotest_common.sh@953 -- # uname 00:05:14.466 21:48:33 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:14.466 21:48:33 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1269989 00:05:14.466 21:48:33 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:14.466 21:48:33 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:14.466 21:48:33 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1269989' 00:05:14.466 killing process with pid 1269989 00:05:14.466 21:48:33 -- common/autotest_common.sh@967 -- # kill 1269989 00:05:14.466 21:48:33 -- common/autotest_common.sh@972 -- # wait 1269989 00:05:18.655 21:48:37 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:18.655 21:48:37 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:18.655 21:48:37 -- spdk/autotest.sh@155 -- # [[ 1 -eq 1 ]] 00:05:18.655 21:48:37 -- spdk/autotest.sh@156 -- # [[ 0 -eq 1 ]] 00:05:18.655 21:48:37 -- spdk/autotest.sh@159 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/qat_setup.sh 00:05:19.222 Restarting all devices. 00:05:25.790 lstat() error: No such file or directory 00:05:25.790 QAT Error: No GENERAL section found 00:05:25.790 Failed to configure qat_dev0 00:05:25.790 lstat() error: No such file or directory 00:05:25.790 QAT Error: No GENERAL section found 00:05:25.790 Failed to configure qat_dev1 00:05:25.790 lstat() error: No such file or directory 00:05:25.790 QAT Error: No GENERAL section found 00:05:25.790 Failed to configure qat_dev2 00:05:25.790 lstat() error: No such file or directory 00:05:25.790 QAT Error: No GENERAL section found 00:05:25.790 Failed to configure qat_dev3 00:05:25.790 lstat() error: No such file or directory 00:05:25.790 QAT Error: No GENERAL section found 00:05:25.790 Failed to configure qat_dev4 00:05:25.790 enable sriov 00:05:25.790 Checking status of all devices. 
00:05:25.790 There is 5 QAT acceleration device(s) in the system: 00:05:25.790 qat_dev0 - type: c6xx, inst_id: 0, node_id: 0, bsf: 0000:1a:00.0, #accel: 5 #engines: 10 state: down 00:05:25.790 qat_dev1 - type: c6xx, inst_id: 1, node_id: 0, bsf: 0000:1c:00.0, #accel: 5 #engines: 10 state: down 00:05:25.790 qat_dev2 - type: c6xx, inst_id: 2, node_id: 0, bsf: 0000:1e:00.0, #accel: 5 #engines: 10 state: down 00:05:25.790 qat_dev3 - type: c6xx, inst_id: 3, node_id: 0, bsf: 0000:3d:00.0, #accel: 5 #engines: 10 state: down 00:05:25.790 qat_dev4 - type: c6xx, inst_id: 4, node_id: 0, bsf: 0000:3f:00.0, #accel: 5 #engines: 10 state: down 00:05:25.790 0000:1a:00.0 set to 16 VFs 00:05:26.727 0000:1c:00.0 set to 16 VFs 00:05:27.295 0000:1e:00.0 set to 16 VFs 00:05:28.233 0000:3d:00.0 set to 16 VFs 00:05:28.801 0000:3f:00.0 set to 16 VFs 00:05:31.336 Properly configured the qat device with driver uio_pci_generic. 00:05:31.336 21:48:50 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:31.336 21:48:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:31.336 21:48:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.336 21:48:50 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:31.336 21:48:50 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh 00:05:31.336 21:48:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.336 21:48:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.336 21:48:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.336 ************************************ 00:05:31.336 START TEST env 00:05:31.336 ************************************ 00:05:31.336 21:48:50 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh 00:05:31.336 * Looking for test storage... 
00:05:31.336 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env 00:05:31.336 21:48:50 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut 00:05:31.336 21:48:50 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.336 21:48:50 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.336 21:48:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.336 ************************************ 00:05:31.336 START TEST env_memory 00:05:31.336 ************************************ 00:05:31.336 21:48:50 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut 00:05:31.336 00:05:31.336 00:05:31.336 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.336 http://cunit.sourceforge.net/ 00:05:31.336 00:05:31.336 00:05:31.336 Suite: memory 00:05:31.336 Test: alloc and free memory map ...[2024-07-13 21:48:50.652831] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:31.336 passed 00:05:31.336 Test: mem map translation ...[2024-07-13 21:48:50.690021] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:31.336 [2024-07-13 21:48:50.690047] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:31.336 [2024-07-13 21:48:50.690119] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:31.336 [2024-07-13 21:48:50.690141] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 
600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:31.595 passed 00:05:31.595 Test: mem map registration ...[2024-07-13 21:48:50.747327] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:31.595 [2024-07-13 21:48:50.747352] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:31.595 passed 00:05:31.595 Test: mem map adjacent registrations ...passed 00:05:31.595 00:05:31.595 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.595 suites 1 1 n/a 0 0 00:05:31.595 tests 4 4 4 0 0 00:05:31.595 asserts 152 152 152 0 n/a 00:05:31.595 00:05:31.595 Elapsed time = 0.207 seconds 00:05:31.595 00:05:31.595 real 0m0.247s 00:05:31.595 user 0m0.222s 00:05:31.595 sys 0m0.024s 00:05:31.596 21:48:50 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.596 21:48:50 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:31.596 ************************************ 00:05:31.596 END TEST env_memory 00:05:31.596 ************************************ 00:05:31.596 21:48:50 env -- common/autotest_common.sh@1142 -- # return 0 00:05:31.596 21:48:50 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:31.596 21:48:50 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.596 21:48:50 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.596 21:48:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.596 ************************************ 00:05:31.596 START TEST env_vtophys 00:05:31.596 ************************************ 00:05:31.596 21:48:50 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:31.596 
EAL: lib.eal log level changed from notice to debug 00:05:31.596 EAL: Detected lcore 0 as core 0 on socket 0 00:05:31.596 EAL: Detected lcore 1 as core 1 on socket 0 00:05:31.596 EAL: Detected lcore 2 as core 2 on socket 0 00:05:31.596 EAL: Detected lcore 3 as core 3 on socket 0 00:05:31.596 EAL: Detected lcore 4 as core 4 on socket 0 00:05:31.596 EAL: Detected lcore 5 as core 5 on socket 0 00:05:31.596 EAL: Detected lcore 6 as core 6 on socket 0 00:05:31.596 EAL: Detected lcore 7 as core 8 on socket 0 00:05:31.596 EAL: Detected lcore 8 as core 9 on socket 0 00:05:31.596 EAL: Detected lcore 9 as core 10 on socket 0 00:05:31.596 EAL: Detected lcore 10 as core 11 on socket 0 00:05:31.596 EAL: Detected lcore 11 as core 12 on socket 0 00:05:31.596 EAL: Detected lcore 12 as core 13 on socket 0 00:05:31.596 EAL: Detected lcore 13 as core 14 on socket 0 00:05:31.596 EAL: Detected lcore 14 as core 16 on socket 0 00:05:31.596 EAL: Detected lcore 15 as core 17 on socket 0 00:05:31.596 EAL: Detected lcore 16 as core 18 on socket 0 00:05:31.596 EAL: Detected lcore 17 as core 19 on socket 0 00:05:31.596 EAL: Detected lcore 18 as core 20 on socket 0 00:05:31.596 EAL: Detected lcore 19 as core 21 on socket 0 00:05:31.596 EAL: Detected lcore 20 as core 22 on socket 0 00:05:31.596 EAL: Detected lcore 21 as core 24 on socket 0 00:05:31.596 EAL: Detected lcore 22 as core 25 on socket 0 00:05:31.596 EAL: Detected lcore 23 as core 26 on socket 0 00:05:31.596 EAL: Detected lcore 24 as core 27 on socket 0 00:05:31.596 EAL: Detected lcore 25 as core 28 on socket 0 00:05:31.596 EAL: Detected lcore 26 as core 29 on socket 0 00:05:31.596 EAL: Detected lcore 27 as core 30 on socket 0 00:05:31.596 EAL: Detected lcore 28 as core 0 on socket 1 00:05:31.596 EAL: Detected lcore 29 as core 1 on socket 1 00:05:31.596 EAL: Detected lcore 30 as core 2 on socket 1 00:05:31.596 EAL: Detected lcore 31 as core 3 on socket 1 00:05:31.596 EAL: Detected lcore 32 as core 4 on socket 1 00:05:31.596 EAL: 
Detected lcore 33 as core 5 on socket 1 00:05:31.596 EAL: Detected lcore 34 as core 6 on socket 1 00:05:31.596 EAL: Detected lcore 35 as core 8 on socket 1 00:05:31.596 EAL: Detected lcore 36 as core 9 on socket 1 00:05:31.596 EAL: Detected lcore 37 as core 10 on socket 1 00:05:31.596 EAL: Detected lcore 38 as core 11 on socket 1 00:05:31.596 EAL: Detected lcore 39 as core 12 on socket 1 00:05:31.596 EAL: Detected lcore 40 as core 13 on socket 1 00:05:31.596 EAL: Detected lcore 41 as core 14 on socket 1 00:05:31.596 EAL: Detected lcore 42 as core 16 on socket 1 00:05:31.596 EAL: Detected lcore 43 as core 17 on socket 1 00:05:31.596 EAL: Detected lcore 44 as core 18 on socket 1 00:05:31.596 EAL: Detected lcore 45 as core 19 on socket 1 00:05:31.596 EAL: Detected lcore 46 as core 20 on socket 1 00:05:31.596 EAL: Detected lcore 47 as core 21 on socket 1 00:05:31.596 EAL: Detected lcore 48 as core 22 on socket 1 00:05:31.596 EAL: Detected lcore 49 as core 24 on socket 1 00:05:31.596 EAL: Detected lcore 50 as core 25 on socket 1 00:05:31.596 EAL: Detected lcore 51 as core 26 on socket 1 00:05:31.596 EAL: Detected lcore 52 as core 27 on socket 1 00:05:31.596 EAL: Detected lcore 53 as core 28 on socket 1 00:05:31.596 EAL: Detected lcore 54 as core 29 on socket 1 00:05:31.596 EAL: Detected lcore 55 as core 30 on socket 1 00:05:31.596 EAL: Detected lcore 56 as core 0 on socket 0 00:05:31.596 EAL: Detected lcore 57 as core 1 on socket 0 00:05:31.596 EAL: Detected lcore 58 as core 2 on socket 0 00:05:31.596 EAL: Detected lcore 59 as core 3 on socket 0 00:05:31.596 EAL: Detected lcore 60 as core 4 on socket 0 00:05:31.596 EAL: Detected lcore 61 as core 5 on socket 0 00:05:31.596 EAL: Detected lcore 62 as core 6 on socket 0 00:05:31.596 EAL: Detected lcore 63 as core 8 on socket 0 00:05:31.596 EAL: Detected lcore 64 as core 9 on socket 0 00:05:31.596 EAL: Detected lcore 65 as core 10 on socket 0 00:05:31.596 EAL: Detected lcore 66 as core 11 on socket 0 00:05:31.596 EAL: 
Detected lcore 67 as core 12 on socket 0 00:05:31.596 EAL: Detected lcore 68 as core 13 on socket 0 00:05:31.596 EAL: Detected lcore 69 as core 14 on socket 0 00:05:31.596 EAL: Detected lcore 70 as core 16 on socket 0 00:05:31.596 EAL: Detected lcore 71 as core 17 on socket 0 00:05:31.596 EAL: Detected lcore 72 as core 18 on socket 0 00:05:31.596 EAL: Detected lcore 73 as core 19 on socket 0 00:05:31.596 EAL: Detected lcore 74 as core 20 on socket 0 00:05:31.596 EAL: Detected lcore 75 as core 21 on socket 0 00:05:31.596 EAL: Detected lcore 76 as core 22 on socket 0 00:05:31.596 EAL: Detected lcore 77 as core 24 on socket 0 00:05:31.596 EAL: Detected lcore 78 as core 25 on socket 0 00:05:31.596 EAL: Detected lcore 79 as core 26 on socket 0 00:05:31.596 EAL: Detected lcore 80 as core 27 on socket 0 00:05:31.596 EAL: Detected lcore 81 as core 28 on socket 0 00:05:31.596 EAL: Detected lcore 82 as core 29 on socket 0 00:05:31.596 EAL: Detected lcore 83 as core 30 on socket 0 00:05:31.596 EAL: Detected lcore 84 as core 0 on socket 1 00:05:31.596 EAL: Detected lcore 85 as core 1 on socket 1 00:05:31.596 EAL: Detected lcore 86 as core 2 on socket 1 00:05:31.596 EAL: Detected lcore 87 as core 3 on socket 1 00:05:31.596 EAL: Detected lcore 88 as core 4 on socket 1 00:05:31.596 EAL: Detected lcore 89 as core 5 on socket 1 00:05:31.596 EAL: Detected lcore 90 as core 6 on socket 1 00:05:31.596 EAL: Detected lcore 91 as core 8 on socket 1 00:05:31.596 EAL: Detected lcore 92 as core 9 on socket 1 00:05:31.596 EAL: Detected lcore 93 as core 10 on socket 1 00:05:31.596 EAL: Detected lcore 94 as core 11 on socket 1 00:05:31.596 EAL: Detected lcore 95 as core 12 on socket 1 00:05:31.596 EAL: Detected lcore 96 as core 13 on socket 1 00:05:31.596 EAL: Detected lcore 97 as core 14 on socket 1 00:05:31.596 EAL: Detected lcore 98 as core 16 on socket 1 00:05:31.596 EAL: Detected lcore 99 as core 17 on socket 1 00:05:31.596 EAL: Detected lcore 100 as core 18 on socket 1 00:05:31.596 EAL: 
Detected lcore 101 as core 19 on socket 1 00:05:31.596 EAL: Detected lcore 102 as core 20 on socket 1 00:05:31.596 EAL: Detected lcore 103 as core 21 on socket 1 00:05:31.596 EAL: Detected lcore 104 as core 22 on socket 1 00:05:31.596 EAL: Detected lcore 105 as core 24 on socket 1 00:05:31.596 EAL: Detected lcore 106 as core 25 on socket 1 00:05:31.596 EAL: Detected lcore 107 as core 26 on socket 1 00:05:31.596 EAL: Detected lcore 108 as core 27 on socket 1 00:05:31.596 EAL: Detected lcore 109 as core 28 on socket 1 00:05:31.596 EAL: Detected lcore 110 as core 29 on socket 1 00:05:31.596 EAL: Detected lcore 111 as core 30 on socket 1 00:05:31.596 EAL: Maximum logical cores by configuration: 128 00:05:31.596 EAL: Detected CPU lcores: 112 00:05:31.596 EAL: Detected NUMA nodes: 2 00:05:31.596 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:31.596 EAL: Detected shared linkage of DPDK 00:05:31.858 EAL: No shared files mode enabled, IPC will be disabled 00:05:31.858 EAL: No shared files mode enabled, IPC is disabled 00:05:31.858 EAL: PCI driver qat for device 0000:1a:01.0 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1a:01.1 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1a:01.2 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1a:01.3 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1a:01.4 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1a:01.5 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1a:01.6 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1a:01.7 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1a:02.0 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1a:02.1 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1a:02.2 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1a:02.3 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 
0000:1a:02.4 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1a:02.5 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1a:02.6 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1a:02.7 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1c:01.0 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1c:01.1 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1c:01.2 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1c:01.3 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1c:01.4 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1c:01.5 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1c:01.6 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1c:01.7 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1c:02.0 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1c:02.1 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1c:02.2 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1c:02.3 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1c:02.4 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1c:02.5 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1c:02.6 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1c:02.7 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1e:01.0 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1e:01.1 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1e:01.2 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1e:01.3 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1e:01.4 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1e:01.5 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1e:01.6 wants IOVA 
as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1e:01.7 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1e:02.0 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1e:02.1 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1e:02.2 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1e:02.3 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1e:02.4 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1e:02.5 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1e:02.6 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:1e:02.7 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:3d:01.0 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:3d:01.1 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:3d:01.2 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:3d:01.3 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:3d:01.4 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:3d:01.5 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:3d:01.6 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:3d:01.7 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:3d:02.0 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:3d:02.1 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:3d:02.2 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:3d:02.3 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:3d:02.4 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:3d:02.5 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:3d:02.6 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:3d:02.7 wants IOVA as 'PA' 00:05:31.858 EAL: PCI driver qat for device 0000:3f:01.0 wants IOVA as 'PA' 00:05:31.859 
EAL: PCI driver qat for device 0000:3f:01.1 wants IOVA as 'PA' 00:05:31.859 EAL: PCI driver qat for device 0000:3f:01.2 wants IOVA as 'PA' 00:05:31.859 EAL: PCI driver qat for device 0000:3f:01.3 wants IOVA as 'PA' 00:05:31.859 EAL: PCI driver qat for device 0000:3f:01.4 wants IOVA as 'PA' 00:05:31.859 EAL: PCI driver qat for device 0000:3f:01.5 wants IOVA as 'PA' 00:05:31.859 EAL: PCI driver qat for device 0000:3f:01.6 wants IOVA as 'PA' 00:05:31.859 EAL: PCI driver qat for device 0000:3f:01.7 wants IOVA as 'PA' 00:05:31.859 EAL: PCI driver qat for device 0000:3f:02.0 wants IOVA as 'PA' 00:05:31.859 EAL: PCI driver qat for device 0000:3f:02.1 wants IOVA as 'PA' 00:05:31.859 EAL: PCI driver qat for device 0000:3f:02.2 wants IOVA as 'PA' 00:05:31.859 EAL: PCI driver qat for device 0000:3f:02.3 wants IOVA as 'PA' 00:05:31.859 EAL: PCI driver qat for device 0000:3f:02.4 wants IOVA as 'PA' 00:05:31.859 EAL: PCI driver qat for device 0000:3f:02.5 wants IOVA as 'PA' 00:05:31.859 EAL: PCI driver qat for device 0000:3f:02.6 wants IOVA as 'PA' 00:05:31.859 EAL: PCI driver qat for device 0000:3f:02.7 wants IOVA as 'PA' 00:05:31.859 EAL: Bus pci wants IOVA as 'PA' 00:05:31.859 EAL: Bus auxiliary wants IOVA as 'DC' 00:05:31.859 EAL: Bus vdev wants IOVA as 'DC' 00:05:31.859 EAL: Selected IOVA mode 'PA' 00:05:31.859 EAL: Probing VFIO support... 00:05:31.859 EAL: IOMMU type 1 (Type 1) is supported 00:05:31.859 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:31.859 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:31.859 EAL: VFIO support initialized 00:05:31.859 EAL: Ask a virtual area of 0x2e000 bytes 00:05:31.859 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:31.859 EAL: Setting up physically contiguous memory... 
00:05:31.859 EAL: Setting maximum number of open files to 524288 00:05:31.859 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:31.859 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:31.859 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:31.859 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.859 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:31.859 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:31.859 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.859 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:31.859 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:31.859 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.859 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:31.859 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:31.859 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.859 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:31.859 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:31.859 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.859 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:31.859 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:31.859 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.859 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:31.859 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:31.859 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.859 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:31.859 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:31.859 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.859 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:31.859 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:31.859 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:05:31.859 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.859 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:31.859 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:31.859 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.859 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:31.859 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:31.859 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.859 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:31.859 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:31.859 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.859 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:31.859 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:31.859 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.859 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:31.859 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:31.859 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.859 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:31.859 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:31.859 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.859 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:31.859 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:31.859 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.859 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:31.859 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:31.859 EAL: Hugepages will be freed exactly as allocated. 
00:05:31.859 EAL: No shared files mode enabled, IPC is disabled 00:05:31.859 EAL: No shared files mode enabled, IPC is disabled 00:05:31.859 EAL: TSC frequency is ~2500000 KHz 00:05:31.859 EAL: Main lcore 0 is ready (tid=7fe04d693b40;cpuset=[0]) 00:05:31.859 EAL: Trying to obtain current memory policy. 00:05:31.859 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.859 EAL: Restoring previous memory policy: 0 00:05:31.859 EAL: request: mp_malloc_sync 00:05:31.859 EAL: No shared files mode enabled, IPC is disabled 00:05:31.859 EAL: Heap on socket 0 was expanded by 2MB 00:05:31.859 EAL: PCI device 0000:1a:01.0 on NUMA socket 0 00:05:31.859 EAL: probe driver: 8086:37c9 qat 00:05:31.859 EAL: PCI memory mapped at 0x202001000000 00:05:31.859 EAL: PCI memory mapped at 0x202001001000 00:05:31.859 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:05:31.859 EAL: PCI device 0000:1a:01.1 on NUMA socket 0 00:05:31.859 EAL: probe driver: 8086:37c9 qat 00:05:31.859 EAL: PCI memory mapped at 0x202001002000 00:05:31.859 EAL: PCI memory mapped at 0x202001003000 00:05:31.859 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:05:31.859 EAL: PCI device 0000:1a:01.2 on NUMA socket 0 00:05:31.859 EAL: probe driver: 8086:37c9 qat 00:05:31.859 EAL: PCI memory mapped at 0x202001004000 00:05:31.859 EAL: PCI memory mapped at 0x202001005000 00:05:31.859 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:05:31.859 EAL: PCI device 0000:1a:01.3 on NUMA socket 0 00:05:31.859 EAL: probe driver: 8086:37c9 qat 00:05:31.859 EAL: PCI memory mapped at 0x202001006000 00:05:31.859 EAL: PCI memory mapped at 0x202001007000 00:05:31.859 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:05:31.859 EAL: PCI device 0000:1a:01.4 on NUMA socket 0 00:05:31.859 EAL: probe driver: 8086:37c9 qat 00:05:31.859 EAL: PCI memory mapped at 0x202001008000 00:05:31.859 EAL: PCI memory mapped at 0x202001009000 00:05:31.859 EAL: 
Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:05:31.859 EAL: PCI device 0000:1a:01.5 on NUMA socket 0 00:05:31.859 EAL: probe driver: 8086:37c9 qat 00:05:31.859 EAL: PCI memory mapped at 0x20200100a000 00:05:31.859 EAL: PCI memory mapped at 0x20200100b000 00:05:31.859 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:05:31.859 EAL: PCI device 0000:1a:01.6 on NUMA socket 0 00:05:31.859 EAL: probe driver: 8086:37c9 qat 00:05:31.859 EAL: PCI memory mapped at 0x20200100c000 00:05:31.860 EAL: PCI memory mapped at 0x20200100d000 00:05:31.860 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:05:31.860 EAL: PCI device 0000:1a:01.7 on NUMA socket 0 00:05:31.860 EAL: probe driver: 8086:37c9 qat 00:05:31.860 EAL: PCI memory mapped at 0x20200100e000 00:05:31.860 EAL: PCI memory mapped at 0x20200100f000 00:05:31.860 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:05:31.860 EAL: PCI device 0000:1a:02.0 on NUMA socket 0 00:05:31.860 EAL: probe driver: 8086:37c9 qat 00:05:31.860 EAL: PCI memory mapped at 0x202001010000 00:05:31.860 EAL: PCI memory mapped at 0x202001011000 00:05:31.860 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:05:31.860 EAL: PCI device 0000:1a:02.1 on NUMA socket 0 00:05:31.860 EAL: probe driver: 8086:37c9 qat 00:05:31.860 EAL: PCI memory mapped at 0x202001012000 00:05:31.860 EAL: PCI memory mapped at 0x202001013000 00:05:31.860 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:05:31.860 EAL: PCI device 0000:1a:02.2 on NUMA socket 0 00:05:31.860 EAL: probe driver: 8086:37c9 qat 00:05:31.860 EAL: PCI memory mapped at 0x202001014000 00:05:31.860 EAL: PCI memory mapped at 0x202001015000 00:05:31.860 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:05:31.860 EAL: PCI device 0000:1a:02.3 on NUMA socket 0 00:05:31.860 EAL: probe driver: 8086:37c9 qat 00:05:31.860 EAL: PCI memory mapped at 
0x202001016000 00:05:31.860 EAL: PCI memory mapped at 0x202001017000 00:05:31.860 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:05:31.860 EAL: PCI device 0000:1a:02.4 on NUMA socket 0 00:05:31.860 EAL: probe driver: 8086:37c9 qat 00:05:31.860 EAL: PCI memory mapped at 0x202001018000 00:05:31.860 EAL: PCI memory mapped at 0x202001019000 00:05:31.860 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:05:31.860 EAL: PCI device 0000:1a:02.5 on NUMA socket 0 00:05:31.860 EAL: probe driver: 8086:37c9 qat 00:05:31.860 EAL: PCI memory mapped at 0x20200101a000 00:05:31.860 EAL: PCI memory mapped at 0x20200101b000 00:05:31.860 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:05:31.860 EAL: PCI device 0000:1a:02.6 on NUMA socket 0 00:05:31.860 EAL: probe driver: 8086:37c9 qat 00:05:31.860 EAL: PCI memory mapped at 0x20200101c000 00:05:31.860 EAL: PCI memory mapped at 0x20200101d000 00:05:31.860 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:05:31.860 EAL: PCI device 0000:1a:02.7 on NUMA socket 0 00:05:31.860 EAL: probe driver: 8086:37c9 qat 00:05:31.860 EAL: PCI memory mapped at 0x20200101e000 00:05:31.860 EAL: PCI memory mapped at 0x20200101f000 00:05:31.860 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:05:31.860 EAL: PCI device 0000:1c:01.0 on NUMA socket 0 00:05:31.860 EAL: probe driver: 8086:37c9 qat 00:05:31.860 EAL: PCI memory mapped at 0x202001020000 00:05:31.860 EAL: PCI memory mapped at 0x202001021000 00:05:31.860 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:05:31.860 EAL: PCI device 0000:1c:01.1 on NUMA socket 0 00:05:31.860 EAL: probe driver: 8086:37c9 qat 00:05:31.860 EAL: PCI memory mapped at 0x202001022000 00:05:31.860 EAL: PCI memory mapped at 0x202001023000 00:05:31.860 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:05:31.860 EAL: PCI device 0000:1c:01.2 on NUMA socket 0 
00:05:31.860 EAL: probe driver: 8086:37c9 qat 00:05:31.860 EAL: PCI memory mapped at 0x202001024000 00:05:31.860 EAL: PCI memory mapped at 0x202001025000 00:05:31.860 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:05:31.860 EAL: PCI device 0000:1c:01.3 on NUMA socket 0 00:05:31.860 EAL: probe driver: 8086:37c9 qat 00:05:31.860 EAL: PCI memory mapped at 0x202001026000 00:05:31.860 EAL: PCI memory mapped at 0x202001027000 00:05:31.860 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:05:31.860 EAL: PCI device 0000:1c:01.4 on NUMA socket 0 00:05:31.860 EAL: probe driver: 8086:37c9 qat 00:05:31.860 EAL: PCI memory mapped at 0x202001028000 00:05:31.860 EAL: PCI memory mapped at 0x202001029000 00:05:31.860 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:05:31.860 EAL: PCI device 0000:1c:01.5 on NUMA socket 0 00:05:31.860 EAL: probe driver: 8086:37c9 qat 00:05:31.860 EAL: PCI memory mapped at 0x20200102a000 00:05:31.860 EAL: PCI memory mapped at 0x20200102b000 00:05:31.860 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:05:31.860 EAL: PCI device 0000:1c:01.6 on NUMA socket 0 00:05:31.860 EAL: probe driver: 8086:37c9 qat 00:05:31.860 EAL: PCI memory mapped at 0x20200102c000 00:05:31.860 EAL: PCI memory mapped at 0x20200102d000 00:05:31.860 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:05:31.860 EAL: PCI device 0000:1c:01.7 on NUMA socket 0 00:05:31.860 EAL: probe driver: 8086:37c9 qat 00:05:31.860 EAL: PCI memory mapped at 0x20200102e000 00:05:31.860 EAL: PCI memory mapped at 0x20200102f000 00:05:31.860 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:05:31.860 EAL: PCI device 0000:1c:02.0 on NUMA socket 0 00:05:31.860 EAL: probe driver: 8086:37c9 qat 00:05:31.860 EAL: PCI memory mapped at 0x202001030000 00:05:31.860 EAL: PCI memory mapped at 0x202001031000 00:05:31.860 EAL: Probe PCI driver: qat (8086:37c9) device: 
0000:1c:02.0 (socket 0) 00:05:31.860 EAL: PCI device 0000:1c:02.1 on NUMA socket 0 00:05:31.860 EAL: probe driver: 8086:37c9 qat 00:05:31.860 EAL: PCI memory mapped at 0x202001032000 00:05:31.860 EAL: PCI memory mapped at 0x202001033000 00:05:31.860 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:05:31.860 EAL: PCI device 0000:1c:02.2 on NUMA socket 0 00:05:31.860 EAL: probe driver: 8086:37c9 qat 00:05:31.860 EAL: PCI memory mapped at 0x202001034000 00:05:31.860 EAL: PCI memory mapped at 0x202001035000 00:05:31.860 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:05:31.860 EAL: PCI device 0000:1c:02.3 on NUMA socket 0 00:05:31.860 EAL: probe driver: 8086:37c9 qat 00:05:31.860 EAL: PCI memory mapped at 0x202001036000 00:05:31.860 EAL: PCI memory mapped at 0x202001037000 00:05:31.860 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:05:31.860 EAL: PCI device 0000:1c:02.4 on NUMA socket 0 00:05:31.860 EAL: probe driver: 8086:37c9 qat 00:05:31.860 EAL: PCI memory mapped at 0x202001038000 00:05:31.860 EAL: PCI memory mapped at 0x202001039000 00:05:31.860 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:05:31.860 EAL: PCI device 0000:1c:02.5 on NUMA socket 0 00:05:31.860 EAL: probe driver: 8086:37c9 qat 00:05:31.860 EAL: PCI memory mapped at 0x20200103a000 00:05:31.860 EAL: PCI memory mapped at 0x20200103b000 00:05:31.860 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:05:31.860 EAL: PCI device 0000:1c:02.6 on NUMA socket 0 00:05:31.860 EAL: probe driver: 8086:37c9 qat 00:05:31.860 EAL: PCI memory mapped at 0x20200103c000 00:05:31.860 EAL: PCI memory mapped at 0x20200103d000 00:05:31.860 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:05:31.860 EAL: PCI device 0000:1c:02.7 on NUMA socket 0 00:05:31.860 EAL: probe driver: 8086:37c9 qat 00:05:31.860 EAL: PCI memory mapped at 0x20200103e000 00:05:31.860 EAL: PCI memory 
mapped at 0x20200103f000 00:05:31.860 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:05:31.860 EAL: PCI device 0000:1e:01.0 on NUMA socket 0 00:05:31.860 EAL: probe driver: 8086:37c9 qat 00:05:31.860 EAL: PCI memory mapped at 0x202001040000 00:05:31.860 EAL: PCI memory mapped at 0x202001041000 00:05:31.860 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:05:31.860 EAL: PCI device 0000:1e:01.1 on NUMA socket 0 00:05:31.860 EAL: probe driver: 8086:37c9 qat 00:05:31.860 EAL: PCI memory mapped at 0x202001042000 00:05:31.860 EAL: PCI memory mapped at 0x202001043000 00:05:31.860 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:05:31.860 EAL: PCI device 0000:1e:01.2 on NUMA socket 0 00:05:31.860 EAL: probe driver: 8086:37c9 qat 00:05:31.860 EAL: PCI memory mapped at 0x202001044000 00:05:31.860 EAL: PCI memory mapped at 0x202001045000 00:05:31.861 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:05:31.861 EAL: PCI device 0000:1e:01.3 on NUMA socket 0 00:05:31.861 EAL: probe driver: 8086:37c9 qat 00:05:31.861 EAL: PCI memory mapped at 0x202001046000 00:05:31.861 EAL: PCI memory mapped at 0x202001047000 00:05:31.861 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:05:31.861 EAL: PCI device 0000:1e:01.4 on NUMA socket 0 00:05:31.861 EAL: probe driver: 8086:37c9 qat 00:05:31.861 EAL: PCI memory mapped at 0x202001048000 00:05:31.861 EAL: PCI memory mapped at 0x202001049000 00:05:31.861 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:05:31.861 EAL: PCI device 0000:1e:01.5 on NUMA socket 0 00:05:31.861 EAL: probe driver: 8086:37c9 qat 00:05:31.861 EAL: PCI memory mapped at 0x20200104a000 00:05:31.861 EAL: PCI memory mapped at 0x20200104b000 00:05:31.861 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:05:31.861 EAL: PCI device 0000:1e:01.6 on NUMA socket 0 00:05:31.861 EAL: probe driver: 8086:37c9 qat 
00:05:31.861 EAL: PCI memory mapped at 0x20200104c000 00:05:31.861 EAL: PCI memory mapped at 0x20200104d000 00:05:31.861 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:05:31.861 EAL: PCI device 0000:1e:01.7 on NUMA socket 0 00:05:31.861 EAL: probe driver: 8086:37c9 qat 00:05:31.861 EAL: PCI memory mapped at 0x20200104e000 00:05:31.861 EAL: PCI memory mapped at 0x20200104f000 00:05:31.861 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:05:31.861 EAL: PCI device 0000:1e:02.0 on NUMA socket 0 00:05:31.861 EAL: probe driver: 8086:37c9 qat 00:05:31.861 EAL: PCI memory mapped at 0x202001050000 00:05:31.861 EAL: PCI memory mapped at 0x202001051000 00:05:31.861 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:05:31.861 EAL: PCI device 0000:1e:02.1 on NUMA socket 0 00:05:31.861 EAL: probe driver: 8086:37c9 qat 00:05:31.861 EAL: PCI memory mapped at 0x202001052000 00:05:31.861 EAL: PCI memory mapped at 0x202001053000 00:05:31.861 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:05:31.861 EAL: PCI device 0000:1e:02.2 on NUMA socket 0 00:05:31.861 EAL: probe driver: 8086:37c9 qat 00:05:31.861 EAL: PCI memory mapped at 0x202001054000 00:05:31.861 EAL: PCI memory mapped at 0x202001055000 00:05:31.861 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:05:31.861 EAL: PCI device 0000:1e:02.3 on NUMA socket 0 00:05:31.861 EAL: probe driver: 8086:37c9 qat 00:05:31.861 EAL: PCI memory mapped at 0x202001056000 00:05:31.861 EAL: PCI memory mapped at 0x202001057000 00:05:31.861 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:05:31.861 EAL: PCI device 0000:1e:02.4 on NUMA socket 0 00:05:31.861 EAL: probe driver: 8086:37c9 qat 00:05:31.861 EAL: PCI memory mapped at 0x202001058000 00:05:31.861 EAL: PCI memory mapped at 0x202001059000 00:05:31.861 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:05:31.861 EAL: PCI 
device 0000:1e:02.5 on NUMA socket 0 00:05:31.861 EAL: probe driver: 8086:37c9 qat 00:05:31.861 EAL: PCI memory mapped at 0x20200105a000 00:05:31.861 EAL: PCI memory mapped at 0x20200105b000 00:05:31.861 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:05:31.861 EAL: PCI device 0000:1e:02.6 on NUMA socket 0 00:05:31.861 EAL: probe driver: 8086:37c9 qat 00:05:31.861 EAL: PCI memory mapped at 0x20200105c000 00:05:31.861 EAL: PCI memory mapped at 0x20200105d000 00:05:31.861 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:05:31.861 EAL: PCI device 0000:1e:02.7 on NUMA socket 0 00:05:31.861 EAL: probe driver: 8086:37c9 qat 00:05:31.861 EAL: PCI memory mapped at 0x20200105e000 00:05:31.861 EAL: PCI memory mapped at 0x20200105f000 00:05:31.861 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:05:31.861 EAL: PCI device 0000:3d:01.0 on NUMA socket 0 00:05:31.861 EAL: probe driver: 8086:37c9 qat 00:05:31.861 EAL: PCI memory mapped at 0x202001060000 00:05:31.861 EAL: PCI memory mapped at 0x202001061000 00:05:31.861 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:05:31.861 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.861 EAL: PCI memory unmapped at 0x202001060000 00:05:31.861 EAL: PCI memory unmapped at 0x202001061000 00:05:31.861 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:31.861 EAL: PCI device 0000:3d:01.1 on NUMA socket 0 00:05:31.861 EAL: probe driver: 8086:37c9 qat 00:05:31.861 EAL: PCI memory mapped at 0x202001062000 00:05:31.861 EAL: PCI memory mapped at 0x202001063000 00:05:31.861 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:05:31.861 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.861 EAL: PCI memory unmapped at 0x202001062000 00:05:31.861 EAL: PCI memory unmapped at 0x202001063000 00:05:31.861 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:31.861 EAL: PCI device 
0000:3d:01.2 on NUMA socket 0 00:05:31.861 EAL: probe driver: 8086:37c9 qat 00:05:31.861 EAL: PCI memory mapped at 0x202001064000 00:05:31.861 EAL: PCI memory mapped at 0x202001065000 00:05:31.861 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:05:31.861 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.861 EAL: PCI memory unmapped at 0x202001064000 00:05:31.861 EAL: PCI memory unmapped at 0x202001065000 00:05:31.861 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:31.861 EAL: PCI device 0000:3d:01.3 on NUMA socket 0 00:05:31.861 EAL: probe driver: 8086:37c9 qat 00:05:31.861 EAL: PCI memory mapped at 0x202001066000 00:05:31.861 EAL: PCI memory mapped at 0x202001067000 00:05:31.861 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:05:31.861 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.861 EAL: PCI memory unmapped at 0x202001066000 00:05:31.861 EAL: PCI memory unmapped at 0x202001067000 00:05:31.861 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:31.861 EAL: PCI device 0000:3d:01.4 on NUMA socket 0 00:05:31.861 EAL: probe driver: 8086:37c9 qat 00:05:31.861 EAL: PCI memory mapped at 0x202001068000 00:05:31.861 EAL: PCI memory mapped at 0x202001069000 00:05:31.861 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:05:31.861 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.861 EAL: PCI memory unmapped at 0x202001068000 00:05:31.861 EAL: PCI memory unmapped at 0x202001069000 00:05:31.861 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:31.861 EAL: PCI device 0000:3d:01.5 on NUMA socket 0 00:05:31.861 EAL: probe driver: 8086:37c9 qat 00:05:31.861 EAL: PCI memory mapped at 0x20200106a000 00:05:31.861 EAL: PCI memory mapped at 0x20200106b000 00:05:31.861 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:05:31.861 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:05:31.861 EAL: PCI memory unmapped at 0x20200106a000 00:05:31.861 EAL: PCI memory unmapped at 0x20200106b000 00:05:31.861 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:31.861 EAL: PCI device 0000:3d:01.6 on NUMA socket 0 00:05:31.861 EAL: probe driver: 8086:37c9 qat 00:05:31.861 EAL: PCI memory mapped at 0x20200106c000 00:05:31.861 EAL: PCI memory mapped at 0x20200106d000 00:05:31.861 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:05:31.861 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.861 EAL: PCI memory unmapped at 0x20200106c000 00:05:31.861 EAL: PCI memory unmapped at 0x20200106d000 00:05:31.861 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:31.861 EAL: PCI device 0000:3d:01.7 on NUMA socket 0 00:05:31.861 EAL: probe driver: 8086:37c9 qat 00:05:31.861 EAL: PCI memory mapped at 0x20200106e000 00:05:31.861 EAL: PCI memory mapped at 0x20200106f000 00:05:31.861 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:05:31.861 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.861 EAL: PCI memory unmapped at 0x20200106e000 00:05:31.861 EAL: PCI memory unmapped at 0x20200106f000 00:05:31.861 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:31.861 EAL: PCI device 0000:3d:02.0 on NUMA socket 0 00:05:31.861 EAL: probe driver: 8086:37c9 qat 00:05:31.861 EAL: PCI memory mapped at 0x202001070000 00:05:31.861 EAL: PCI memory mapped at 0x202001071000 00:05:31.861 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:05:31.862 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.862 EAL: PCI memory unmapped at 0x202001070000 00:05:31.862 EAL: PCI memory unmapped at 0x202001071000 00:05:31.862 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:31.862 EAL: PCI device 0000:3d:02.1 on NUMA socket 0 00:05:31.862 EAL: probe driver: 8086:37c9 qat 00:05:31.862 EAL: PCI memory mapped at 0x202001072000 00:05:31.862 
EAL: PCI memory mapped at 0x202001073000 00:05:31.862 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:05:31.862 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.862 EAL: PCI memory unmapped at 0x202001072000 00:05:31.862 EAL: PCI memory unmapped at 0x202001073000 00:05:31.862 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:31.862 EAL: PCI device 0000:3d:02.2 on NUMA socket 0 00:05:31.862 EAL: probe driver: 8086:37c9 qat 00:05:31.862 EAL: PCI memory mapped at 0x202001074000 00:05:31.862 EAL: PCI memory mapped at 0x202001075000 00:05:31.862 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:05:31.862 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.862 EAL: PCI memory unmapped at 0x202001074000 00:05:31.862 EAL: PCI memory unmapped at 0x202001075000 00:05:31.862 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:31.862 EAL: PCI device 0000:3d:02.3 on NUMA socket 0 00:05:31.862 EAL: probe driver: 8086:37c9 qat 00:05:31.862 EAL: PCI memory mapped at 0x202001076000 00:05:31.862 EAL: PCI memory mapped at 0x202001077000 00:05:31.862 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:05:31.862 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.862 EAL: PCI memory unmapped at 0x202001076000 00:05:31.862 EAL: PCI memory unmapped at 0x202001077000 00:05:31.862 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:31.862 EAL: PCI device 0000:3d:02.4 on NUMA socket 0 00:05:31.862 EAL: probe driver: 8086:37c9 qat 00:05:31.862 EAL: PCI memory mapped at 0x202001078000 00:05:31.862 EAL: PCI memory mapped at 0x202001079000 00:05:31.862 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:05:31.862 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.862 EAL: PCI memory unmapped at 0x202001078000 00:05:31.862 EAL: PCI memory unmapped at 0x202001079000 00:05:31.862 EAL: Requested device 
0000:3d:02.4 cannot be used 00:05:31.862 EAL: PCI device 0000:3d:02.5 on NUMA socket 0 00:05:31.862 EAL: probe driver: 8086:37c9 qat 00:05:31.862 EAL: PCI memory mapped at 0x20200107a000 00:05:31.862 EAL: PCI memory mapped at 0x20200107b000 00:05:31.862 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:05:31.862 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.862 EAL: PCI memory unmapped at 0x20200107a000 00:05:31.862 EAL: PCI memory unmapped at 0x20200107b000 00:05:31.862 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:31.862 EAL: PCI device 0000:3d:02.6 on NUMA socket 0 00:05:31.862 EAL: probe driver: 8086:37c9 qat 00:05:31.862 EAL: PCI memory mapped at 0x20200107c000 00:05:31.862 EAL: PCI memory mapped at 0x20200107d000 00:05:31.862 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:05:31.862 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.862 EAL: PCI memory unmapped at 0x20200107c000 00:05:31.862 EAL: PCI memory unmapped at 0x20200107d000 00:05:31.862 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:31.862 EAL: PCI device 0000:3d:02.7 on NUMA socket 0 00:05:31.862 EAL: probe driver: 8086:37c9 qat 00:05:31.862 EAL: PCI memory mapped at 0x20200107e000 00:05:31.862 EAL: PCI memory mapped at 0x20200107f000 00:05:31.862 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:05:31.862 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.862 EAL: PCI memory unmapped at 0x20200107e000 00:05:31.862 EAL: PCI memory unmapped at 0x20200107f000 00:05:31.862 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:31.862 EAL: PCI device 0000:3f:01.0 on NUMA socket 0 00:05:31.862 EAL: probe driver: 8086:37c9 qat 00:05:31.862 EAL: PCI memory mapped at 0x202001080000 00:05:31.862 EAL: PCI memory mapped at 0x202001081000 00:05:31.862 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:05:31.862 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.862 EAL: PCI memory unmapped at 0x202001080000 00:05:31.862 EAL: PCI memory unmapped at 0x202001081000 00:05:31.862 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:31.862 EAL: PCI device 0000:3f:01.1 on NUMA socket 0 00:05:31.862 EAL: probe driver: 8086:37c9 qat 00:05:31.862 EAL: PCI memory mapped at 0x202001082000 00:05:31.862 EAL: PCI memory mapped at 0x202001083000 00:05:31.862 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:05:31.862 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.862 EAL: PCI memory unmapped at 0x202001082000 00:05:31.862 EAL: PCI memory unmapped at 0x202001083000 00:05:31.862 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:31.862 EAL: PCI device 0000:3f:01.2 on NUMA socket 0 00:05:31.862 EAL: probe driver: 8086:37c9 qat 00:05:31.862 EAL: PCI memory mapped at 0x202001084000 00:05:31.862 EAL: PCI memory mapped at 0x202001085000 00:05:31.862 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:05:31.862 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.862 EAL: PCI memory unmapped at 0x202001084000 00:05:31.862 EAL: PCI memory unmapped at 0x202001085000 00:05:31.862 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:31.862 EAL: PCI device 0000:3f:01.3 on NUMA socket 0 00:05:31.862 EAL: probe driver: 8086:37c9 qat 00:05:31.862 EAL: PCI memory mapped at 0x202001086000 00:05:31.862 EAL: PCI memory mapped at 0x202001087000 00:05:31.862 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:05:31.862 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.862 EAL: PCI memory unmapped at 0x202001086000 00:05:31.862 EAL: PCI memory unmapped at 0x202001087000 00:05:31.862 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:31.862 EAL: PCI device 0000:3f:01.4 on NUMA socket 0 00:05:31.862 EAL: probe driver: 8086:37c9 qat 
00:05:31.862 EAL: PCI memory mapped at 0x202001088000 00:05:31.862 EAL: PCI memory mapped at 0x202001089000 00:05:31.862 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:05:31.862 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.862 EAL: PCI memory unmapped at 0x202001088000 00:05:31.862 EAL: PCI memory unmapped at 0x202001089000 00:05:31.862 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:31.862 EAL: PCI device 0000:3f:01.5 on NUMA socket 0 00:05:31.862 EAL: probe driver: 8086:37c9 qat 00:05:31.862 EAL: PCI memory mapped at 0x20200108a000 00:05:31.862 EAL: PCI memory mapped at 0x20200108b000 00:05:31.862 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:05:31.862 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.862 EAL: PCI memory unmapped at 0x20200108a000 00:05:31.862 EAL: PCI memory unmapped at 0x20200108b000 00:05:31.862 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:31.862 EAL: PCI device 0000:3f:01.6 on NUMA socket 0 00:05:31.862 EAL: probe driver: 8086:37c9 qat 00:05:31.862 EAL: PCI memory mapped at 0x20200108c000 00:05:31.862 EAL: PCI memory mapped at 0x20200108d000 00:05:31.862 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:05:31.862 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.862 EAL: PCI memory unmapped at 0x20200108c000 00:05:31.862 EAL: PCI memory unmapped at 0x20200108d000 00:05:31.862 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:31.862 EAL: PCI device 0000:3f:01.7 on NUMA socket 0 00:05:31.862 EAL: probe driver: 8086:37c9 qat 00:05:31.862 EAL: PCI memory mapped at 0x20200108e000 00:05:31.862 EAL: PCI memory mapped at 0x20200108f000 00:05:31.862 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:05:31.862 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.862 EAL: PCI memory unmapped at 0x20200108e000 00:05:31.863 EAL: PCI 
memory unmapped at 0x20200108f000 00:05:31.863 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:31.863 EAL: PCI device 0000:3f:02.0 on NUMA socket 0 00:05:31.863 EAL: probe driver: 8086:37c9 qat 00:05:31.863 EAL: PCI memory mapped at 0x202001090000 00:05:31.863 EAL: PCI memory mapped at 0x202001091000 00:05:31.863 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:05:31.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.863 EAL: PCI memory unmapped at 0x202001090000 00:05:31.863 EAL: PCI memory unmapped at 0x202001091000 00:05:31.863 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:31.863 EAL: PCI device 0000:3f:02.1 on NUMA socket 0 00:05:31.863 EAL: probe driver: 8086:37c9 qat 00:05:31.863 EAL: PCI memory mapped at 0x202001092000 00:05:31.863 EAL: PCI memory mapped at 0x202001093000 00:05:31.863 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:05:31.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.863 EAL: PCI memory unmapped at 0x202001092000 00:05:31.863 EAL: PCI memory unmapped at 0x202001093000 00:05:31.863 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:31.863 EAL: PCI device 0000:3f:02.2 on NUMA socket 0 00:05:31.863 EAL: probe driver: 8086:37c9 qat 00:05:31.863 EAL: PCI memory mapped at 0x202001094000 00:05:31.863 EAL: PCI memory mapped at 0x202001095000 00:05:31.863 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:05:31.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.863 EAL: PCI memory unmapped at 0x202001094000 00:05:31.863 EAL: PCI memory unmapped at 0x202001095000 00:05:31.863 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:31.863 EAL: PCI device 0000:3f:02.3 on NUMA socket 0 00:05:31.863 EAL: probe driver: 8086:37c9 qat 00:05:31.863 EAL: PCI memory mapped at 0x202001096000 00:05:31.863 EAL: PCI memory mapped at 0x202001097000 00:05:31.863 EAL: Probe PCI driver: qat 
(8086:37c9) device: 0000:3f:02.3 (socket 0) 00:05:31.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.863 EAL: PCI memory unmapped at 0x202001096000 00:05:31.863 EAL: PCI memory unmapped at 0x202001097000 00:05:31.863 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:31.863 EAL: PCI device 0000:3f:02.4 on NUMA socket 0 00:05:31.863 EAL: probe driver: 8086:37c9 qat 00:05:31.863 EAL: PCI memory mapped at 0x202001098000 00:05:31.863 EAL: PCI memory mapped at 0x202001099000 00:05:31.863 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:05:31.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.863 EAL: PCI memory unmapped at 0x202001098000 00:05:31.863 EAL: PCI memory unmapped at 0x202001099000 00:05:31.863 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:31.863 EAL: PCI device 0000:3f:02.5 on NUMA socket 0 00:05:31.863 EAL: probe driver: 8086:37c9 qat 00:05:31.863 EAL: PCI memory mapped at 0x20200109a000 00:05:31.863 EAL: PCI memory mapped at 0x20200109b000 00:05:31.863 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:05:31.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.863 EAL: PCI memory unmapped at 0x20200109a000 00:05:31.863 EAL: PCI memory unmapped at 0x20200109b000 00:05:31.863 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:31.863 EAL: PCI device 0000:3f:02.6 on NUMA socket 0 00:05:31.863 EAL: probe driver: 8086:37c9 qat 00:05:31.863 EAL: PCI memory mapped at 0x20200109c000 00:05:31.863 EAL: PCI memory mapped at 0x20200109d000 00:05:31.863 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:05:31.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.863 EAL: PCI memory unmapped at 0x20200109c000 00:05:31.863 EAL: PCI memory unmapped at 0x20200109d000 00:05:31.863 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:31.863 EAL: PCI device 0000:3f:02.7 on NUMA 
socket 0 00:05:31.863 EAL: probe driver: 8086:37c9 qat 00:05:31.863 EAL: PCI memory mapped at 0x20200109e000 00:05:31.863 EAL: PCI memory mapped at 0x20200109f000 00:05:31.863 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:05:31.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.863 EAL: PCI memory unmapped at 0x20200109e000 00:05:31.863 EAL: PCI memory unmapped at 0x20200109f000 00:05:31.863 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:31.863 EAL: No shared files mode enabled, IPC is disabled 00:05:31.863 EAL: No shared files mode enabled, IPC is disabled 00:05:31.863 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:31.863 EAL: Mem event callback 'spdk:(nil)' registered 00:05:31.863 00:05:31.863 00:05:31.863 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.863 http://cunit.sourceforge.net/ 00:05:31.863 00:05:31.863 00:05:31.863 Suite: components_suite 00:05:32.123 Test: vtophys_malloc_test ...passed 00:05:32.123 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:32.123 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.123 EAL: Restoring previous memory policy: 4 00:05:32.123 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.123 EAL: request: mp_malloc_sync 00:05:32.123 EAL: No shared files mode enabled, IPC is disabled 00:05:32.123 EAL: Heap on socket 0 was expanded by 4MB 00:05:32.123 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.123 EAL: request: mp_malloc_sync 00:05:32.123 EAL: No shared files mode enabled, IPC is disabled 00:05:32.123 EAL: Heap on socket 0 was shrunk by 4MB 00:05:32.123 EAL: Trying to obtain current memory policy. 
00:05:32.123 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.123 EAL: Restoring previous memory policy: 4 00:05:32.123 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.123 EAL: request: mp_malloc_sync 00:05:32.123 EAL: No shared files mode enabled, IPC is disabled 00:05:32.123 EAL: Heap on socket 0 was expanded by 6MB 00:05:32.383 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.383 EAL: request: mp_malloc_sync 00:05:32.383 EAL: No shared files mode enabled, IPC is disabled 00:05:32.383 EAL: Heap on socket 0 was shrunk by 6MB 00:05:32.383 EAL: Trying to obtain current memory policy. 00:05:32.383 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.383 EAL: Restoring previous memory policy: 4 00:05:32.383 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.383 EAL: request: mp_malloc_sync 00:05:32.383 EAL: No shared files mode enabled, IPC is disabled 00:05:32.383 EAL: Heap on socket 0 was expanded by 10MB 00:05:32.383 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.383 EAL: request: mp_malloc_sync 00:05:32.383 EAL: No shared files mode enabled, IPC is disabled 00:05:32.383 EAL: Heap on socket 0 was shrunk by 10MB 00:05:32.383 EAL: Trying to obtain current memory policy. 00:05:32.383 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.383 EAL: Restoring previous memory policy: 4 00:05:32.383 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.383 EAL: request: mp_malloc_sync 00:05:32.383 EAL: No shared files mode enabled, IPC is disabled 00:05:32.383 EAL: Heap on socket 0 was expanded by 18MB 00:05:32.383 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.383 EAL: request: mp_malloc_sync 00:05:32.383 EAL: No shared files mode enabled, IPC is disabled 00:05:32.383 EAL: Heap on socket 0 was shrunk by 18MB 00:05:32.383 EAL: Trying to obtain current memory policy. 
00:05:32.383 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.383 EAL: Restoring previous memory policy: 4 00:05:32.383 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.383 EAL: request: mp_malloc_sync 00:05:32.383 EAL: No shared files mode enabled, IPC is disabled 00:05:32.383 EAL: Heap on socket 0 was expanded by 34MB 00:05:32.383 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.383 EAL: request: mp_malloc_sync 00:05:32.383 EAL: No shared files mode enabled, IPC is disabled 00:05:32.383 EAL: Heap on socket 0 was shrunk by 34MB 00:05:32.383 EAL: Trying to obtain current memory policy. 00:05:32.383 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.383 EAL: Restoring previous memory policy: 4 00:05:32.383 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.383 EAL: request: mp_malloc_sync 00:05:32.383 EAL: No shared files mode enabled, IPC is disabled 00:05:32.383 EAL: Heap on socket 0 was expanded by 66MB 00:05:32.643 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.643 EAL: request: mp_malloc_sync 00:05:32.643 EAL: No shared files mode enabled, IPC is disabled 00:05:32.643 EAL: Heap on socket 0 was shrunk by 66MB 00:05:32.643 EAL: Trying to obtain current memory policy. 00:05:32.643 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.643 EAL: Restoring previous memory policy: 4 00:05:32.643 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.643 EAL: request: mp_malloc_sync 00:05:32.643 EAL: No shared files mode enabled, IPC is disabled 00:05:32.643 EAL: Heap on socket 0 was expanded by 130MB 00:05:32.902 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.902 EAL: request: mp_malloc_sync 00:05:32.902 EAL: No shared files mode enabled, IPC is disabled 00:05:32.902 EAL: Heap on socket 0 was shrunk by 130MB 00:05:33.200 EAL: Trying to obtain current memory policy. 
00:05:33.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.200 EAL: Restoring previous memory policy: 4 00:05:33.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.200 EAL: request: mp_malloc_sync 00:05:33.200 EAL: No shared files mode enabled, IPC is disabled 00:05:33.200 EAL: Heap on socket 0 was expanded by 258MB 00:05:33.769 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.769 EAL: request: mp_malloc_sync 00:05:33.769 EAL: No shared files mode enabled, IPC is disabled 00:05:33.769 EAL: Heap on socket 0 was shrunk by 258MB 00:05:34.036 EAL: Trying to obtain current memory policy. 00:05:34.036 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.303 EAL: Restoring previous memory policy: 4 00:05:34.303 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.303 EAL: request: mp_malloc_sync 00:05:34.303 EAL: No shared files mode enabled, IPC is disabled 00:05:34.303 EAL: Heap on socket 0 was expanded by 514MB 00:05:35.239 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.239 EAL: request: mp_malloc_sync 00:05:35.239 EAL: No shared files mode enabled, IPC is disabled 00:05:35.239 EAL: Heap on socket 0 was shrunk by 514MB 00:05:36.174 EAL: Trying to obtain current memory policy. 
00:05:36.174 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:36.434 EAL: Restoring previous memory policy: 4
00:05:36.434 EAL: Calling mem event callback 'spdk:(nil)'
00:05:36.434 EAL: request: mp_malloc_sync
00:05:36.434 EAL: No shared files mode enabled, IPC is disabled
00:05:36.434 EAL: Heap on socket 0 was expanded by 1026MB
00:05:38.337 EAL: Calling mem event callback 'spdk:(nil)'
00:05:38.596 EAL: request: mp_malloc_sync
00:05:38.596 EAL: No shared files mode enabled, IPC is disabled
00:05:38.596 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:39.971 passed
00:05:39.971 
00:05:39.971 Run Summary: Type Total Ran Passed Failed Inactive
00:05:39.971 suites 1 1 n/a 0 0
00:05:39.971 tests 2 2 2 0 0
00:05:39.971 asserts 6636 6636 6636 0 n/a
00:05:39.971 
00:05:39.971 Elapsed time = 8.038 seconds
00:05:39.971 EAL: No shared files mode enabled, IPC is disabled
00:05:39.971 EAL: No shared files mode enabled, IPC is disabled
00:05:39.971 EAL: No shared files mode enabled, IPC is disabled
00:05:39.971 
00:05:39.971 real 0m8.335s
00:05:39.971 user 0m7.442s
00:05:39.971 sys 0m0.841s
00:05:39.971 21:48:59 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:39.971 21:48:59 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:39.971 ************************************
00:05:39.971 END TEST env_vtophys
00:05:39.971 ************************************
00:05:39.971 21:48:59 env -- common/autotest_common.sh@1142 -- # return 0
00:05:39.971 21:48:59 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut
00:05:39.971 21:48:59 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:39.971 21:48:59 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:39.971 21:48:59 env -- common/autotest_common.sh@10 -- # set +x
00:05:39.971 ************************************
00:05:39.971 START TEST env_pci
00:05:39.971 ************************************
00:05:39.971 21:48:59 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut
00:05:40.229 
00:05:40.229 
00:05:40.229 CUnit - A unit testing framework for C - Version 2.1-3
00:05:40.229 http://cunit.sourceforge.net/
00:05:40.229 
00:05:40.229 
00:05:40.229 Suite: pci
00:05:40.229 Test: pci_hook ...[2024-07-13 21:48:59.387833] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1275341 has claimed it
00:05:40.229 EAL: Cannot find device (10000:00:01.0)
00:05:40.229 EAL: Failed to attach device on primary process
00:05:40.229 passed
00:05:40.229 
00:05:40.229 Run Summary: Type Total Ran Passed Failed Inactive
00:05:40.229 suites 1 1 n/a 0 0
00:05:40.229 tests 1 1 1 0 0
00:05:40.229 asserts 25 25 25 0 n/a
00:05:40.229 
00:05:40.229 Elapsed time = 0.067 seconds
00:05:40.229 
00:05:40.229 real 0m0.167s
00:05:40.229 user 0m0.067s
00:05:40.229 sys 0m0.098s
00:05:40.229 21:48:59 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:40.229 21:48:59 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:40.229 ************************************
00:05:40.229 END TEST env_pci
00:05:40.229 ************************************
00:05:40.229 21:48:59 env -- common/autotest_common.sh@1142 -- # return 0
00:05:40.229 21:48:59 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:40.229 21:48:59 env -- env/env.sh@15 -- # uname
00:05:40.229 21:48:59 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:40.229 21:48:59 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:40.229 21:48:59 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:40.229 21:48:59 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:05:40.229 21:48:59 env --
common/autotest_common.sh@1105 -- # xtrace_disable
00:05:40.229 21:48:59 env -- common/autotest_common.sh@10 -- # set +x
00:05:40.229 ************************************
00:05:40.229 START TEST env_dpdk_post_init
00:05:40.229 ************************************
00:05:40.229 21:48:59 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:40.489 EAL: Detected CPU lcores: 112
00:05:40.489 EAL: Detected NUMA nodes: 2
00:05:40.489 EAL: Detected shared linkage of DPDK
00:05:40.489 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:40.489 EAL: Selected IOVA mode 'PA'
00:05:40.489 EAL: VFIO support initialized
00:05:40.489 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0)
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_asym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_sym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.489 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0)
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_asym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_sym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.489 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0)
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_asym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_sym
00:05:40.489 CRYPTODEV:
Initialisation parameters - name: 0000:1a:01.2_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.489 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0)
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_asym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_sym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.489 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0)
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_asym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_sym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.489 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0)
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_asym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_sym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.489 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0)
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_asym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_sym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.489 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0)
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_asym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_sym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.489 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0)
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_asym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_sym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.489 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0)
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_asym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_sym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.489 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0)
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_asym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_sym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.489 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0)
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_asym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_sym
00:05:40.489 CRYPTODEV: Initialisation parameters - name:
0000:1a:02.3_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.489 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0)
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_asym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_sym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.489 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0)
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_asym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_sym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.489 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0)
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_asym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_sym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.489 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0)
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_asym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_sym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.489 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0)
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_asym
00:05:40.489 CRYPTODEV: Initialisation
parameters - name: 0000:1c:01.0_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_sym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.489 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0)
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_asym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_sym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.489 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0)
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_asym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_sym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.489 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0)
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_asym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_sym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.489 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0)
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_asym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_sym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_sym,socket id: 0, max queue pairs:
0
00:05:40.489 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0)
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_asym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_sym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.489 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0)
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_asym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_sym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.489 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0)
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_asym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_sym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.489 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0)
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_asym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_sym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.489 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0)
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_asym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_asym,socket id: 0,
max queue pairs: 0
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_sym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.489 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0)
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_asym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.489 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_sym
00:05:40.489 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.489 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0)
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_asym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_sym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0)
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_asym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_sym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0)
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_asym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_sym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9)
device: 0000:1c:02.6 (socket 0)
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_asym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_sym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0)
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_asym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_sym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0)
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_asym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_sym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0)
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_asym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_sym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0)
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_asym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.490 CRYPTODEV: Creating
cryptodev 0000:1e:01.2_qat_sym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0)
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_asym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_sym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0)
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_asym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_sym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0)
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_asym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_sym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0)
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_asym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_sym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0)
00:05:40.490
CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_asym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_sym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0)
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_asym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_sym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0)
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_asym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_sym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0)
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_asym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_sym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0)
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_asym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_sym
00:05:40.490
CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0)
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_asym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_sym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0)
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_asym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_sym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0)
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_asym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_sym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0)
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_asym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_asym,socket id: 0, max queue pairs: 0
00:05:40.490 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_sym
00:05:40.490 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_sym,socket id: 0, max queue pairs: 0
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0)
00:05:40.490 qat_pci_device_allocate(): Reached maximum number
of QAT devices
00:05:40.490 EAL: Requested device 0000:3d:01.0 cannot be used
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0)
00:05:40.490 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.490 EAL: Requested device 0000:3d:01.1 cannot be used
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0)
00:05:40.490 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.490 EAL: Requested device 0000:3d:01.2 cannot be used
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0)
00:05:40.490 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.490 EAL: Requested device 0000:3d:01.3 cannot be used
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0)
00:05:40.490 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.490 EAL: Requested device 0000:3d:01.4 cannot be used
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0)
00:05:40.490 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.490 EAL: Requested device 0000:3d:01.5 cannot be used
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0)
00:05:40.490 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.490 EAL: Requested device 0000:3d:01.6 cannot be used
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0)
00:05:40.490 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.490 EAL: Requested device 0000:3d:01.7 cannot be used
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0)
00:05:40.490 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.490 EAL: Requested device 0000:3d:02.0 cannot be used
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0)
00:05:40.490
qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.490 EAL: Requested device 0000:3d:02.1 cannot be used
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0)
00:05:40.490 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.490 EAL: Requested device 0000:3d:02.2 cannot be used
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0)
00:05:40.490 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.490 EAL: Requested device 0000:3d:02.3 cannot be used
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0)
00:05:40.490 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.490 EAL: Requested device 0000:3d:02.4 cannot be used
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0)
00:05:40.490 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.490 EAL: Requested device 0000:3d:02.5 cannot be used
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0)
00:05:40.490 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.490 EAL: Requested device 0000:3d:02.6 cannot be used
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0)
00:05:40.490 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.490 EAL: Requested device 0000:3d:02.7 cannot be used
00:05:40.490 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0)
00:05:40.490 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.490 EAL: Requested device 0000:3f:01.0 cannot be used
00:05:40.491 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0)
00:05:40.491 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.491 EAL: Requested device 0000:3f:01.1 cannot be used
00:05:40.491 EAL: Probe PCI driver: qat (8086:37c9) device:
0000:3f:01.2 (socket 0)
00:05:40.491 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.491 EAL: Requested device 0000:3f:01.2 cannot be used
00:05:40.491 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0)
00:05:40.491 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.491 EAL: Requested device 0000:3f:01.3 cannot be used
00:05:40.491 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0)
00:05:40.491 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.491 EAL: Requested device 0000:3f:01.4 cannot be used
00:05:40.491 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0)
00:05:40.491 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.491 EAL: Requested device 0000:3f:01.5 cannot be used
00:05:40.491 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0)
00:05:40.491 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.491 EAL: Requested device 0000:3f:01.6 cannot be used
00:05:40.491 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0)
00:05:40.491 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.491 EAL: Requested device 0000:3f:01.7 cannot be used
00:05:40.491 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0)
00:05:40.491 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.491 EAL: Requested device 0000:3f:02.0 cannot be used
00:05:40.491 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0)
00:05:40.491 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.491 EAL: Requested device 0000:3f:02.1 cannot be used
00:05:40.491 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0)
00:05:40.491 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.491 EAL: Requested device 0000:3f:02.2 cannot be used
00:05:40.491 EAL: Probe PCI
driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0)
00:05:40.491 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.491 EAL: Requested device 0000:3f:02.3 cannot be used
00:05:40.491 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0)
00:05:40.491 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.491 EAL: Requested device 0000:3f:02.4 cannot be used
00:05:40.491 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0)
00:05:40.491 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.491 EAL: Requested device 0000:3f:02.5 cannot be used
00:05:40.491 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0)
00:05:40.491 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.491 EAL: Requested device 0000:3f:02.6 cannot be used
00:05:40.491 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0)
00:05:40.491 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:40.491 EAL: Requested device 0000:3f:02.7 cannot be used
00:05:40.491 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:40.750 EAL: Using IOMMU type 1 (Type 1)
00:05:40.750 EAL: Ignore mapping IO port bar(1)
00:05:40.750 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:05:40.750 EAL: Ignore mapping IO port bar(1)
00:05:40.750 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:05:40.750 EAL: Ignore mapping IO port bar(1)
00:05:40.750 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:05:40.750 EAL: Ignore mapping IO port bar(1)
00:05:40.750 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:05:40.750 EAL: Ignore mapping IO port bar(1)
00:05:40.750 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:05:40.750 EAL: Ignore mapping IO port bar(1)
00:05:40.750 EAL: Probe PCI driver: spdk_ioat
(8086:2021) device: 0000:00:04.5 (socket 0) 00:05:40.750 EAL: Ignore mapping IO port bar(1) 00:05:40.750 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:40.750 EAL: Ignore mapping IO port bar(1) 00:05:40.750 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:05:40.750 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 
0000:3f:01.0 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:40.750 EAL: Probe PCI 
driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:40.750 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:05:40.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.750 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:40.750 EAL: Ignore mapping IO port bar(1) 00:05:40.750 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:40.750 EAL: Ignore mapping IO port bar(1) 00:05:40.750 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:40.750 EAL: Ignore mapping IO port bar(1) 00:05:40.750 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:40.750 EAL: Ignore mapping IO 
port bar(1) 00:05:40.750 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:40.750 EAL: Ignore mapping IO port bar(1) 00:05:40.750 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:40.750 EAL: Ignore mapping IO port bar(1) 00:05:40.750 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:40.750 EAL: Ignore mapping IO port bar(1) 00:05:40.750 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:40.750 EAL: Ignore mapping IO port bar(1) 00:05:40.750 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:41.686 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:45.874 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:45.874 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001120000 00:05:45.874 Starting DPDK initialization... 00:05:45.874 Starting SPDK post initialization... 00:05:45.874 SPDK NVMe probe 00:05:45.874 Attaching to 0000:d8:00.0 00:05:45.874 Attached to 0000:d8:00.0 00:05:45.874 Cleaning up... 
00:05:45.874 00:05:45.874 real 0m5.535s 00:05:45.874 user 0m4.136s 00:05:45.874 sys 0m0.457s 00:05:45.874 21:49:05 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.874 21:49:05 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:45.874 ************************************ 00:05:45.874 END TEST env_dpdk_post_init 00:05:45.874 ************************************ 00:05:45.874 21:49:05 env -- common/autotest_common.sh@1142 -- # return 0 00:05:45.874 21:49:05 env -- env/env.sh@26 -- # uname 00:05:45.874 21:49:05 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:45.874 21:49:05 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:45.874 21:49:05 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.874 21:49:05 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.874 21:49:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:45.874 ************************************ 00:05:45.874 START TEST env_mem_callbacks 00:05:45.874 ************************************ 00:05:45.874 21:49:05 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:46.134 EAL: Detected CPU lcores: 112 00:05:46.134 EAL: Detected NUMA nodes: 2 00:05:46.134 EAL: Detected shared linkage of DPDK 00:05:46.135 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:46.135 EAL: Selected IOVA mode 'PA' 00:05:46.135 EAL: VFIO support initialized 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_asym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_sym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 
0000:1a:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_asym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_sym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_asym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_sym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_asym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_sym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_asym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_sym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_asym 00:05:46.135 CRYPTODEV: Initialisation 
parameters - name: 0000:1a:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_sym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_asym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_sym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_asym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_sym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_asym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_sym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_asym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_sym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_sym,socket id: 0, max queue pairs: 
0 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_asym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_sym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_asym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_sym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_asym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_sym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_asym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_sym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_asym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_asym,socket id: 0, 
max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_sym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_asym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_sym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_asym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_sym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_asym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_sym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_asym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_sym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) 
device: 0000:1c:01.3 (socket 0) 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_asym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_sym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_asym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_sym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_asym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_sym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_asym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_sym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_asym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating 
cryptodev 0000:1c:01.7_qat_sym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_asym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_sym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_asym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_sym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_asym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_sym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_asym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_sym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.135 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:05:46.135 
CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_asym 00:05:46.135 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.135 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_sym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_asym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_sym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_asym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_sym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_asym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_sym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_asym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_sym 00:05:46.136 
CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_asym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_sym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_asym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_sym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_asym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_sym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_asym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_sym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_asym 
00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_sym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_asym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_sym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_asym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_sym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_asym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_sym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_asym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_sym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 
0000:1e:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_asym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_sym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_asym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_sym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_asym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_sym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_asym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_sym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_asym 00:05:46.136 CRYPTODEV: Initialisation 
parameters - name: 0000:1e:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_sym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_asym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:46.136 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_sym 00:05:46.136 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:05:46.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.136 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:05:46.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.136 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:05:46.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.136 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:05:46.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.136 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:05:46.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.136 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:05:46.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.136 EAL: 
Requested device 0000:3d:01.5 cannot be used 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:05:46.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.136 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:05:46.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.136 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:05:46.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.136 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:05:46.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.136 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:05:46.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.136 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:05:46.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.136 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:05:46.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.136 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:05:46.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.136 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:05:46.136 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:05:46.136 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:05:46.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.136 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:05:46.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.136 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:05:46.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.136 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:05:46.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.136 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:46.136 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:05:46.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.137 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:46.137 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:05:46.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.137 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:46.137 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:05:46.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.137 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:46.137 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:05:46.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.137 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:46.137 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:05:46.137 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:05:46.137 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:46.137 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:05:46.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.137 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:46.137 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:05:46.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.137 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:46.137 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:05:46.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.137 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:46.137 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:05:46.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.137 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:46.137 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:05:46.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.137 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:46.137 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:05:46.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.137 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:46.137 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:05:46.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.137 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:46.137 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:05:46.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.137 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:46.137 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:46.137 00:05:46.137 
00:05:46.137 CUnit - A unit testing framework for C - Version 2.1-3 00:05:46.137 http://cunit.sourceforge.net/ 00:05:46.137 00:05:46.137 00:05:46.137 Suite: memory 00:05:46.137 Test: test ... 00:05:46.137 register 0x200000200000 2097152 00:05:46.137 malloc 3145728 00:05:46.137 register 0x200000400000 4194304 00:05:46.137 buf 0x2000004fffc0 len 3145728 PASSED 00:05:46.137 malloc 64 00:05:46.137 buf 0x2000004ffec0 len 64 PASSED 00:05:46.137 malloc 4194304 00:05:46.137 register 0x200000800000 6291456 00:05:46.137 buf 0x2000009fffc0 len 4194304 PASSED 00:05:46.137 free 0x2000004fffc0 3145728 00:05:46.137 free 0x2000004ffec0 64 00:05:46.137 unregister 0x200000400000 4194304 PASSED 00:05:46.137 free 0x2000009fffc0 4194304 00:05:46.137 unregister 0x200000800000 6291456 PASSED 00:05:46.137 malloc 8388608 00:05:46.137 register 0x200000400000 10485760 00:05:46.137 buf 0x2000005fffc0 len 8388608 PASSED 00:05:46.137 free 0x2000005fffc0 8388608 00:05:46.137 unregister 0x200000400000 10485760 PASSED 00:05:46.137 passed 00:05:46.137 00:05:46.137 Run Summary: Type Total Ran Passed Failed Inactive 00:05:46.137 suites 1 1 n/a 0 0 00:05:46.137 tests 1 1 1 0 0 00:05:46.137 asserts 15 15 15 0 n/a 00:05:46.137 00:05:46.137 Elapsed time = 0.062 seconds 00:05:46.137 00:05:46.137 real 0m0.221s 00:05:46.137 user 0m0.115s 00:05:46.137 sys 0m0.105s 00:05:46.137 21:49:05 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.137 21:49:05 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:46.137 ************************************ 00:05:46.137 END TEST env_mem_callbacks 00:05:46.137 ************************************ 00:05:46.137 21:49:05 env -- common/autotest_common.sh@1142 -- # return 0 00:05:46.137 00:05:46.137 real 0m15.047s 00:05:46.137 user 0m12.177s 00:05:46.137 sys 0m1.917s 00:05:46.137 21:49:05 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.137 21:49:05 env -- common/autotest_common.sh@10 -- # set +x 
00:05:46.137 ************************************ 00:05:46.137 END TEST env 00:05:46.137 ************************************ 00:05:46.396 21:49:05 -- common/autotest_common.sh@1142 -- # return 0 00:05:46.396 21:49:05 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh 00:05:46.396 21:49:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.396 21:49:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.396 21:49:05 -- common/autotest_common.sh@10 -- # set +x 00:05:46.396 ************************************ 00:05:46.396 START TEST rpc 00:05:46.396 ************************************ 00:05:46.396 21:49:05 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh 00:05:46.396 * Looking for test storage... 00:05:46.396 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:46.396 21:49:05 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1276535 00:05:46.396 21:49:05 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.396 21:49:05 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1276535 00:05:46.396 21:49:05 rpc -- common/autotest_common.sh@829 -- # '[' -z 1276535 ']' 00:05:46.396 21:49:05 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.396 21:49:05 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.396 21:49:05 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:46.396 21:49:05 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:46.396 21:49:05 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.396 21:49:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.396 [2024-07-13 21:49:05.757846] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:46.396 [2024-07-13 21:49:05.757967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1276535 ] 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:46.656 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:05:46.656 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:46.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.656 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:46.656 [2024-07-13 21:49:05.918483] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.916 [2024-07-13 21:49:06.123678] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:46.916 [2024-07-13 21:49:06.123723] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1276535' to capture a snapshot of events at runtime. 00:05:46.916 [2024-07-13 21:49:06.123736] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:46.916 [2024-07-13 21:49:06.123749] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:05:46.916 [2024-07-13 21:49:06.123758] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1276535 for offline analysis/debug. 00:05:46.916 [2024-07-13 21:49:06.123792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.855 21:49:06 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.855 21:49:06 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:47.855 21:49:06 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:47.855 21:49:06 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:47.855 21:49:06 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:47.855 21:49:06 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:47.855 21:49:06 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.855 21:49:06 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.855 21:49:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.855 ************************************ 00:05:47.855 START TEST rpc_integrity 00:05:47.855 ************************************ 00:05:47.855 21:49:07 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:47.855 21:49:07 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:47.855 21:49:07 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.855 21:49:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:47.855 21:49:07 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.855 21:49:07 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:47.855 21:49:07 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:47.855 21:49:07 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:47.855 21:49:07 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:47.855 21:49:07 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.855 21:49:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.855 21:49:07 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.855 21:49:07 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:47.855 21:49:07 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:47.855 21:49:07 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.855 21:49:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.855 21:49:07 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.855 21:49:07 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:47.855 { 00:05:47.855 "name": "Malloc0", 00:05:47.855 "aliases": [ 00:05:47.855 "f313c88c-8949-4c2a-bb1a-227f6109c270" 00:05:47.855 ], 00:05:47.855 "product_name": "Malloc disk", 00:05:47.855 "block_size": 512, 00:05:47.855 "num_blocks": 16384, 00:05:47.855 "uuid": "f313c88c-8949-4c2a-bb1a-227f6109c270", 00:05:47.855 "assigned_rate_limits": { 00:05:47.855 "rw_ios_per_sec": 0, 00:05:47.855 "rw_mbytes_per_sec": 0, 00:05:47.855 "r_mbytes_per_sec": 0, 00:05:47.855 "w_mbytes_per_sec": 0 00:05:47.855 }, 00:05:47.855 "claimed": false, 00:05:47.855 "zoned": false, 00:05:47.855 "supported_io_types": { 00:05:47.855 "read": true, 00:05:47.855 "write": true, 00:05:47.855 "unmap": true, 00:05:47.855 "flush": true, 00:05:47.855 "reset": true, 00:05:47.855 "nvme_admin": false, 00:05:47.855 "nvme_io": false, 00:05:47.855 
"nvme_io_md": false, 00:05:47.855 "write_zeroes": true, 00:05:47.855 "zcopy": true, 00:05:47.855 "get_zone_info": false, 00:05:47.855 "zone_management": false, 00:05:47.855 "zone_append": false, 00:05:47.855 "compare": false, 00:05:47.855 "compare_and_write": false, 00:05:47.855 "abort": true, 00:05:47.855 "seek_hole": false, 00:05:47.855 "seek_data": false, 00:05:47.855 "copy": true, 00:05:47.855 "nvme_iov_md": false 00:05:47.855 }, 00:05:47.855 "memory_domains": [ 00:05:47.855 { 00:05:47.855 "dma_device_id": "system", 00:05:47.855 "dma_device_type": 1 00:05:47.855 }, 00:05:47.855 { 00:05:47.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.855 "dma_device_type": 2 00:05:47.855 } 00:05:47.855 ], 00:05:47.855 "driver_specific": {} 00:05:47.855 } 00:05:47.855 ]' 00:05:47.855 21:49:07 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:47.855 21:49:07 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:47.855 21:49:07 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:47.855 21:49:07 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.855 21:49:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.855 [2024-07-13 21:49:07.176266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:47.855 [2024-07-13 21:49:07.176322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:47.855 [2024-07-13 21:49:07.176345] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003fc80 00:05:47.855 [2024-07-13 21:49:07.176359] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:47.855 [2024-07-13 21:49:07.178463] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:47.855 [2024-07-13 21:49:07.178501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:47.855 Passthru0 00:05:47.855 21:49:07 
rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.855 21:49:07 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:47.855 21:49:07 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.855 21:49:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.855 21:49:07 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.855 21:49:07 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:47.855 { 00:05:47.855 "name": "Malloc0", 00:05:47.855 "aliases": [ 00:05:47.855 "f313c88c-8949-4c2a-bb1a-227f6109c270" 00:05:47.855 ], 00:05:47.855 "product_name": "Malloc disk", 00:05:47.855 "block_size": 512, 00:05:47.855 "num_blocks": 16384, 00:05:47.855 "uuid": "f313c88c-8949-4c2a-bb1a-227f6109c270", 00:05:47.855 "assigned_rate_limits": { 00:05:47.855 "rw_ios_per_sec": 0, 00:05:47.855 "rw_mbytes_per_sec": 0, 00:05:47.855 "r_mbytes_per_sec": 0, 00:05:47.855 "w_mbytes_per_sec": 0 00:05:47.855 }, 00:05:47.855 "claimed": true, 00:05:47.856 "claim_type": "exclusive_write", 00:05:47.856 "zoned": false, 00:05:47.856 "supported_io_types": { 00:05:47.856 "read": true, 00:05:47.856 "write": true, 00:05:47.856 "unmap": true, 00:05:47.856 "flush": true, 00:05:47.856 "reset": true, 00:05:47.856 "nvme_admin": false, 00:05:47.856 "nvme_io": false, 00:05:47.856 "nvme_io_md": false, 00:05:47.856 "write_zeroes": true, 00:05:47.856 "zcopy": true, 00:05:47.856 "get_zone_info": false, 00:05:47.856 "zone_management": false, 00:05:47.856 "zone_append": false, 00:05:47.856 "compare": false, 00:05:47.856 "compare_and_write": false, 00:05:47.856 "abort": true, 00:05:47.856 "seek_hole": false, 00:05:47.856 "seek_data": false, 00:05:47.856 "copy": true, 00:05:47.856 "nvme_iov_md": false 00:05:47.856 }, 00:05:47.856 "memory_domains": [ 00:05:47.856 { 00:05:47.856 "dma_device_id": "system", 00:05:47.856 "dma_device_type": 1 00:05:47.856 }, 00:05:47.856 { 00:05:47.856 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:05:47.856 "dma_device_type": 2 00:05:47.856 } 00:05:47.856 ], 00:05:47.856 "driver_specific": {} 00:05:47.856 }, 00:05:47.856 { 00:05:47.856 "name": "Passthru0", 00:05:47.856 "aliases": [ 00:05:47.856 "8f7900bb-e127-56e0-a213-067f355040f9" 00:05:47.856 ], 00:05:47.856 "product_name": "passthru", 00:05:47.856 "block_size": 512, 00:05:47.856 "num_blocks": 16384, 00:05:47.856 "uuid": "8f7900bb-e127-56e0-a213-067f355040f9", 00:05:47.856 "assigned_rate_limits": { 00:05:47.856 "rw_ios_per_sec": 0, 00:05:47.856 "rw_mbytes_per_sec": 0, 00:05:47.856 "r_mbytes_per_sec": 0, 00:05:47.856 "w_mbytes_per_sec": 0 00:05:47.856 }, 00:05:47.856 "claimed": false, 00:05:47.856 "zoned": false, 00:05:47.856 "supported_io_types": { 00:05:47.856 "read": true, 00:05:47.856 "write": true, 00:05:47.856 "unmap": true, 00:05:47.856 "flush": true, 00:05:47.856 "reset": true, 00:05:47.856 "nvme_admin": false, 00:05:47.856 "nvme_io": false, 00:05:47.856 "nvme_io_md": false, 00:05:47.856 "write_zeroes": true, 00:05:47.856 "zcopy": true, 00:05:47.856 "get_zone_info": false, 00:05:47.856 "zone_management": false, 00:05:47.856 "zone_append": false, 00:05:47.856 "compare": false, 00:05:47.856 "compare_and_write": false, 00:05:47.856 "abort": true, 00:05:47.856 "seek_hole": false, 00:05:47.856 "seek_data": false, 00:05:47.856 "copy": true, 00:05:47.856 "nvme_iov_md": false 00:05:47.856 }, 00:05:47.856 "memory_domains": [ 00:05:47.856 { 00:05:47.856 "dma_device_id": "system", 00:05:47.856 "dma_device_type": 1 00:05:47.856 }, 00:05:47.856 { 00:05:47.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.856 "dma_device_type": 2 00:05:47.856 } 00:05:47.856 ], 00:05:47.856 "driver_specific": { 00:05:47.856 "passthru": { 00:05:47.856 "name": "Passthru0", 00:05:47.856 "base_bdev_name": "Malloc0" 00:05:47.856 } 00:05:47.856 } 00:05:47.856 } 00:05:47.856 ]' 00:05:47.856 21:49:07 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:47.856 21:49:07 rpc.rpc_integrity -- rpc/rpc.sh@21 
-- # '[' 2 == 2 ']' 00:05:47.856 21:49:07 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:47.856 21:49:07 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.856 21:49:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.115 21:49:07 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.115 21:49:07 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:48.115 21:49:07 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.115 21:49:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.115 21:49:07 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.115 21:49:07 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:48.115 21:49:07 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.115 21:49:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.115 21:49:07 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.115 21:49:07 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:48.115 21:49:07 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:48.115 21:49:07 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:48.115 00:05:48.115 real 0m0.279s 00:05:48.115 user 0m0.164s 00:05:48.115 sys 0m0.043s 00:05:48.115 21:49:07 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.115 21:49:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.115 ************************************ 00:05:48.115 END TEST rpc_integrity 00:05:48.115 ************************************ 00:05:48.115 21:49:07 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:48.115 21:49:07 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:48.115 21:49:07 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.115 21:49:07 rpc 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.115 21:49:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.115 ************************************ 00:05:48.115 START TEST rpc_plugins 00:05:48.115 ************************************ 00:05:48.115 21:49:07 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:48.115 21:49:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:48.115 21:49:07 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.115 21:49:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.115 21:49:07 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.115 21:49:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:48.115 21:49:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:48.115 21:49:07 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.115 21:49:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.115 21:49:07 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.115 21:49:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:48.115 { 00:05:48.115 "name": "Malloc1", 00:05:48.115 "aliases": [ 00:05:48.115 "a0ba4342-c9e3-4447-a8e3-dce5eadb6bc3" 00:05:48.115 ], 00:05:48.115 "product_name": "Malloc disk", 00:05:48.115 "block_size": 4096, 00:05:48.115 "num_blocks": 256, 00:05:48.116 "uuid": "a0ba4342-c9e3-4447-a8e3-dce5eadb6bc3", 00:05:48.116 "assigned_rate_limits": { 00:05:48.116 "rw_ios_per_sec": 0, 00:05:48.116 "rw_mbytes_per_sec": 0, 00:05:48.116 "r_mbytes_per_sec": 0, 00:05:48.116 "w_mbytes_per_sec": 0 00:05:48.116 }, 00:05:48.116 "claimed": false, 00:05:48.116 "zoned": false, 00:05:48.116 "supported_io_types": { 00:05:48.116 "read": true, 00:05:48.116 "write": true, 00:05:48.116 "unmap": true, 00:05:48.116 "flush": true, 00:05:48.116 "reset": true, 00:05:48.116 "nvme_admin": false, 00:05:48.116 
"nvme_io": false, 00:05:48.116 "nvme_io_md": false, 00:05:48.116 "write_zeroes": true, 00:05:48.116 "zcopy": true, 00:05:48.116 "get_zone_info": false, 00:05:48.116 "zone_management": false, 00:05:48.116 "zone_append": false, 00:05:48.116 "compare": false, 00:05:48.116 "compare_and_write": false, 00:05:48.116 "abort": true, 00:05:48.116 "seek_hole": false, 00:05:48.116 "seek_data": false, 00:05:48.116 "copy": true, 00:05:48.116 "nvme_iov_md": false 00:05:48.116 }, 00:05:48.116 "memory_domains": [ 00:05:48.116 { 00:05:48.116 "dma_device_id": "system", 00:05:48.116 "dma_device_type": 1 00:05:48.116 }, 00:05:48.116 { 00:05:48.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.116 "dma_device_type": 2 00:05:48.116 } 00:05:48.116 ], 00:05:48.116 "driver_specific": {} 00:05:48.116 } 00:05:48.116 ]' 00:05:48.116 21:49:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:48.116 21:49:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:48.116 21:49:07 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:48.116 21:49:07 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.116 21:49:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.116 21:49:07 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.116 21:49:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:48.116 21:49:07 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.116 21:49:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.116 21:49:07 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.116 21:49:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:48.116 21:49:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:48.413 21:49:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:48.413 00:05:48.413 real 0m0.146s 00:05:48.413 user 0m0.088s 00:05:48.413 sys 0m0.022s 00:05:48.413 
21:49:07 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.413 21:49:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.413 ************************************ 00:05:48.413 END TEST rpc_plugins 00:05:48.413 ************************************ 00:05:48.413 21:49:07 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:48.413 21:49:07 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:48.413 21:49:07 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.413 21:49:07 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.413 21:49:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.413 ************************************ 00:05:48.413 START TEST rpc_trace_cmd_test 00:05:48.413 ************************************ 00:05:48.413 21:49:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:48.413 21:49:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:48.413 21:49:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:48.413 21:49:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.413 21:49:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:48.413 21:49:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.413 21:49:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:48.413 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1276535", 00:05:48.413 "tpoint_group_mask": "0x8", 00:05:48.413 "iscsi_conn": { 00:05:48.413 "mask": "0x2", 00:05:48.413 "tpoint_mask": "0x0" 00:05:48.413 }, 00:05:48.413 "scsi": { 00:05:48.413 "mask": "0x4", 00:05:48.413 "tpoint_mask": "0x0" 00:05:48.413 }, 00:05:48.413 "bdev": { 00:05:48.413 "mask": "0x8", 00:05:48.413 "tpoint_mask": "0xffffffffffffffff" 00:05:48.413 }, 00:05:48.413 "nvmf_rdma": { 00:05:48.413 "mask": "0x10", 00:05:48.413 "tpoint_mask": "0x0" 
00:05:48.413 }, 00:05:48.413 "nvmf_tcp": { 00:05:48.413 "mask": "0x20", 00:05:48.413 "tpoint_mask": "0x0" 00:05:48.413 }, 00:05:48.413 "ftl": { 00:05:48.413 "mask": "0x40", 00:05:48.413 "tpoint_mask": "0x0" 00:05:48.413 }, 00:05:48.413 "blobfs": { 00:05:48.413 "mask": "0x80", 00:05:48.413 "tpoint_mask": "0x0" 00:05:48.413 }, 00:05:48.413 "dsa": { 00:05:48.413 "mask": "0x200", 00:05:48.413 "tpoint_mask": "0x0" 00:05:48.413 }, 00:05:48.413 "thread": { 00:05:48.413 "mask": "0x400", 00:05:48.413 "tpoint_mask": "0x0" 00:05:48.413 }, 00:05:48.413 "nvme_pcie": { 00:05:48.413 "mask": "0x800", 00:05:48.413 "tpoint_mask": "0x0" 00:05:48.413 }, 00:05:48.413 "iaa": { 00:05:48.413 "mask": "0x1000", 00:05:48.413 "tpoint_mask": "0x0" 00:05:48.413 }, 00:05:48.413 "nvme_tcp": { 00:05:48.413 "mask": "0x2000", 00:05:48.413 "tpoint_mask": "0x0" 00:05:48.413 }, 00:05:48.413 "bdev_nvme": { 00:05:48.413 "mask": "0x4000", 00:05:48.413 "tpoint_mask": "0x0" 00:05:48.413 }, 00:05:48.413 "sock": { 00:05:48.413 "mask": "0x8000", 00:05:48.413 "tpoint_mask": "0x0" 00:05:48.413 } 00:05:48.413 }' 00:05:48.413 21:49:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:48.413 21:49:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:48.413 21:49:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:48.413 21:49:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:48.413 21:49:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:48.413 21:49:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:48.413 21:49:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:48.673 21:49:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:48.673 21:49:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:48.673 21:49:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:48.673 00:05:48.673 
real 0m0.232s 00:05:48.673 user 0m0.184s 00:05:48.673 sys 0m0.041s 00:05:48.673 21:49:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.673 21:49:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:48.673 ************************************ 00:05:48.673 END TEST rpc_trace_cmd_test 00:05:48.673 ************************************ 00:05:48.673 21:49:07 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:48.673 21:49:07 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:48.673 21:49:07 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:48.673 21:49:07 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:48.673 21:49:07 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.673 21:49:07 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.673 21:49:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.673 ************************************ 00:05:48.673 START TEST rpc_daemon_integrity 00:05:48.673 ************************************ 00:05:48.673 21:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:48.673 21:49:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:48.673 21:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.673 21:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.673 21:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.673 21:49:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:48.673 21:49:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:48.673 21:49:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:48.673 21:49:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:48.673 21:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.673 
21:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.673 21:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.673 21:49:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:48.673 21:49:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:48.673 21:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.673 21:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.673 21:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.673 21:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:48.673 { 00:05:48.673 "name": "Malloc2", 00:05:48.673 "aliases": [ 00:05:48.673 "b51412ba-0f2f-4ef9-b816-bb0b43068ab9" 00:05:48.673 ], 00:05:48.673 "product_name": "Malloc disk", 00:05:48.673 "block_size": 512, 00:05:48.673 "num_blocks": 16384, 00:05:48.673 "uuid": "b51412ba-0f2f-4ef9-b816-bb0b43068ab9", 00:05:48.673 "assigned_rate_limits": { 00:05:48.673 "rw_ios_per_sec": 0, 00:05:48.673 "rw_mbytes_per_sec": 0, 00:05:48.673 "r_mbytes_per_sec": 0, 00:05:48.673 "w_mbytes_per_sec": 0 00:05:48.673 }, 00:05:48.673 "claimed": false, 00:05:48.673 "zoned": false, 00:05:48.673 "supported_io_types": { 00:05:48.673 "read": true, 00:05:48.673 "write": true, 00:05:48.673 "unmap": true, 00:05:48.673 "flush": true, 00:05:48.673 "reset": true, 00:05:48.673 "nvme_admin": false, 00:05:48.673 "nvme_io": false, 00:05:48.673 "nvme_io_md": false, 00:05:48.673 "write_zeroes": true, 00:05:48.673 "zcopy": true, 00:05:48.673 "get_zone_info": false, 00:05:48.673 "zone_management": false, 00:05:48.673 "zone_append": false, 00:05:48.673 "compare": false, 00:05:48.673 "compare_and_write": false, 00:05:48.673 "abort": true, 00:05:48.673 "seek_hole": false, 00:05:48.673 "seek_data": false, 00:05:48.673 "copy": true, 00:05:48.673 "nvme_iov_md": false 00:05:48.673 }, 00:05:48.673 
"memory_domains": [ 00:05:48.673 { 00:05:48.673 "dma_device_id": "system", 00:05:48.673 "dma_device_type": 1 00:05:48.673 }, 00:05:48.673 { 00:05:48.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.673 "dma_device_type": 2 00:05:48.673 } 00:05:48.673 ], 00:05:48.673 "driver_specific": {} 00:05:48.673 } 00:05:48.673 ]' 00:05:48.673 21:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.932 [2024-07-13 21:49:08.067899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:48.932 [2024-07-13 21:49:08.067952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:48.932 [2024-07-13 21:49:08.067971] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:05:48.932 [2024-07-13 21:49:08.067985] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:48.932 [2024-07-13 21:49:08.070031] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:48.932 [2024-07-13 21:49:08.070061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:48.932 Passthru0 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:48.932 { 00:05:48.932 "name": "Malloc2", 00:05:48.932 "aliases": [ 00:05:48.932 "b51412ba-0f2f-4ef9-b816-bb0b43068ab9" 00:05:48.932 ], 00:05:48.932 "product_name": "Malloc disk", 00:05:48.932 "block_size": 512, 00:05:48.932 "num_blocks": 16384, 00:05:48.932 "uuid": "b51412ba-0f2f-4ef9-b816-bb0b43068ab9", 00:05:48.932 "assigned_rate_limits": { 00:05:48.932 "rw_ios_per_sec": 0, 00:05:48.932 "rw_mbytes_per_sec": 0, 00:05:48.932 "r_mbytes_per_sec": 0, 00:05:48.932 "w_mbytes_per_sec": 0 00:05:48.932 }, 00:05:48.932 "claimed": true, 00:05:48.932 "claim_type": "exclusive_write", 00:05:48.932 "zoned": false, 00:05:48.932 "supported_io_types": { 00:05:48.932 "read": true, 00:05:48.932 "write": true, 00:05:48.932 "unmap": true, 00:05:48.932 "flush": true, 00:05:48.932 "reset": true, 00:05:48.932 "nvme_admin": false, 00:05:48.932 "nvme_io": false, 00:05:48.932 "nvme_io_md": false, 00:05:48.932 "write_zeroes": true, 00:05:48.932 "zcopy": true, 00:05:48.932 "get_zone_info": false, 00:05:48.932 "zone_management": false, 00:05:48.932 "zone_append": false, 00:05:48.932 "compare": false, 00:05:48.932 "compare_and_write": false, 00:05:48.932 "abort": true, 00:05:48.932 "seek_hole": false, 00:05:48.932 "seek_data": false, 00:05:48.932 "copy": true, 00:05:48.932 "nvme_iov_md": false 00:05:48.932 }, 00:05:48.932 "memory_domains": [ 00:05:48.932 { 00:05:48.932 "dma_device_id": "system", 00:05:48.932 "dma_device_type": 1 00:05:48.932 }, 00:05:48.932 { 00:05:48.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.932 "dma_device_type": 2 00:05:48.932 } 00:05:48.932 ], 00:05:48.932 "driver_specific": {} 00:05:48.932 }, 00:05:48.932 { 00:05:48.932 "name": "Passthru0", 00:05:48.932 "aliases": [ 00:05:48.932 "152bb34f-6cb4-570b-8477-6af25bc220da" 00:05:48.932 ], 00:05:48.932 "product_name": "passthru", 00:05:48.932 "block_size": 512, 00:05:48.932 "num_blocks": 
16384, 00:05:48.932 "uuid": "152bb34f-6cb4-570b-8477-6af25bc220da", 00:05:48.932 "assigned_rate_limits": { 00:05:48.932 "rw_ios_per_sec": 0, 00:05:48.932 "rw_mbytes_per_sec": 0, 00:05:48.932 "r_mbytes_per_sec": 0, 00:05:48.932 "w_mbytes_per_sec": 0 00:05:48.932 }, 00:05:48.932 "claimed": false, 00:05:48.932 "zoned": false, 00:05:48.932 "supported_io_types": { 00:05:48.932 "read": true, 00:05:48.932 "write": true, 00:05:48.932 "unmap": true, 00:05:48.932 "flush": true, 00:05:48.932 "reset": true, 00:05:48.932 "nvme_admin": false, 00:05:48.932 "nvme_io": false, 00:05:48.932 "nvme_io_md": false, 00:05:48.932 "write_zeroes": true, 00:05:48.932 "zcopy": true, 00:05:48.932 "get_zone_info": false, 00:05:48.932 "zone_management": false, 00:05:48.932 "zone_append": false, 00:05:48.932 "compare": false, 00:05:48.932 "compare_and_write": false, 00:05:48.932 "abort": true, 00:05:48.932 "seek_hole": false, 00:05:48.932 "seek_data": false, 00:05:48.932 "copy": true, 00:05:48.932 "nvme_iov_md": false 00:05:48.932 }, 00:05:48.932 "memory_domains": [ 00:05:48.932 { 00:05:48.932 "dma_device_id": "system", 00:05:48.932 "dma_device_type": 1 00:05:48.932 }, 00:05:48.932 { 00:05:48.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.932 "dma_device_type": 2 00:05:48.932 } 00:05:48.932 ], 00:05:48.932 "driver_specific": { 00:05:48.932 "passthru": { 00:05:48.932 "name": "Passthru0", 00:05:48.932 "base_bdev_name": "Malloc2" 00:05:48.932 } 00:05:48.932 } 00:05:48.932 } 00:05:48.932 ]' 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:48.932 00:05:48.932 real 0m0.300s 00:05:48.932 user 0m0.171s 00:05:48.932 sys 0m0.051s 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.932 21:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.932 ************************************ 00:05:48.932 END TEST rpc_daemon_integrity 00:05:48.932 ************************************ 00:05:48.932 21:49:08 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:48.932 21:49:08 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:48.932 21:49:08 rpc -- rpc/rpc.sh@84 -- # killprocess 1276535 00:05:48.932 21:49:08 rpc -- common/autotest_common.sh@948 -- # '[' -z 1276535 ']' 00:05:48.932 21:49:08 rpc -- common/autotest_common.sh@952 -- # kill -0 1276535 00:05:48.932 21:49:08 rpc -- common/autotest_common.sh@953 -- # uname 00:05:48.932 21:49:08 rpc -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:48.932 21:49:08 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1276535 00:05:48.932 21:49:08 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:48.932 21:49:08 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:48.932 21:49:08 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1276535' 00:05:48.932 killing process with pid 1276535 00:05:48.932 21:49:08 rpc -- common/autotest_common.sh@967 -- # kill 1276535 00:05:48.932 21:49:08 rpc -- common/autotest_common.sh@972 -- # wait 1276535 00:05:51.494 00:05:51.494 real 0m5.103s 00:05:51.494 user 0m5.576s 00:05:51.494 sys 0m0.973s 00:05:51.494 21:49:10 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.494 21:49:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.494 ************************************ 00:05:51.494 END TEST rpc 00:05:51.494 ************************************ 00:05:51.494 21:49:10 -- common/autotest_common.sh@1142 -- # return 0 00:05:51.494 21:49:10 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:51.494 21:49:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.494 21:49:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.494 21:49:10 -- common/autotest_common.sh@10 -- # set +x 00:05:51.494 ************************************ 00:05:51.494 START TEST skip_rpc 00:05:51.494 ************************************ 00:05:51.494 21:49:10 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:51.494 * Looking for test storage... 
00:05:51.494 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:51.494 21:49:10 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:51.494 21:49:10 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:05:51.494 21:49:10 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:51.494 21:49:10 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.494 21:49:10 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.494 21:49:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.753 ************************************ 00:05:51.753 START TEST skip_rpc 00:05:51.753 ************************************ 00:05:51.753 21:49:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:51.753 21:49:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:51.753 21:49:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1277740 00:05:51.753 21:49:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:51.753 21:49:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:51.753 [2024-07-13 21:49:10.976010] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:51.753 [2024-07-13 21:49:10.976089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1277740 ] 00:05:51.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.753 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:51.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.753 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:51.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.753 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:51.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.753 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:51.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.753 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:51.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.753 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:51.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.753 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:51.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.753 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:51.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.753 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:51.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.753 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:51.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.753 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:51.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.753 EAL: Requested device 0000:3d:02.3 cannot be used 
00:05:51.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.753 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:51.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.753 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:51.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.753 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:51.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.753 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:51.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.753 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:51.753 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.754 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:51.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.754 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:51.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.754 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:51.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.754 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:51.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.754 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:51.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.754 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:51.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.754 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:51.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.754 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:51.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.754 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:51.754 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.754 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:51.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.754 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:51.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.754 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:51.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.754 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:51.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.754 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:51.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.754 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:51.754 [2024-07-13 21:49:11.139382] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.013 [2024-07-13 21:49:11.342159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.290 21:49:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:57.290 21:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:57.290 21:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:57.290 21:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:57.290 21:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.290 21:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:57.290 21:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.290 21:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:57.290 21:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.290 21:49:15 skip_rpc.skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:05:57.291 21:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:57.291 21:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:57.291 21:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:57.291 21:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:57.291 21:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:57.291 21:49:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:57.291 21:49:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1277740 00:05:57.291 21:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1277740 ']' 00:05:57.291 21:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1277740 00:05:57.291 21:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:57.291 21:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:57.291 21:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1277740 00:05:57.291 21:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:57.291 21:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:57.291 21:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1277740' 00:05:57.291 killing process with pid 1277740 00:05:57.291 21:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1277740 00:05:57.291 21:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1277740 00:05:59.195 00:05:59.195 real 0m7.416s 00:05:59.195 user 0m6.977s 00:05:59.195 sys 0m0.455s 00:05:59.195 21:49:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.195 21:49:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.195 
************************************ 00:05:59.195 END TEST skip_rpc 00:05:59.195 ************************************ 00:05:59.195 21:49:18 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:59.195 21:49:18 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:59.195 21:49:18 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.195 21:49:18 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.195 21:49:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.195 ************************************ 00:05:59.195 START TEST skip_rpc_with_json 00:05:59.195 ************************************ 00:05:59.195 21:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:59.195 21:49:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:59.195 21:49:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1279043 00:05:59.195 21:49:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.195 21:49:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.195 21:49:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1279043 00:05:59.195 21:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1279043 ']' 00:05:59.195 21:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.195 21:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.195 21:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:59.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.195 21:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.195 21:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:59.195 [2024-07-13 21:49:18.491234] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:59.195 [2024-07-13 21:49:18.491324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1279043 ] 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:59.454 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:59.454 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:59.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:59.454 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:59.454 [2024-07-13 21:49:18.652865] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.714 [2024-07-13 21:49:18.856936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.652 21:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.652 21:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:06:00.652 21:49:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:00.652 21:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.652 21:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:00.652 
[2024-07-13 21:49:19.721564] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:06:00.652 request:
00:06:00.652 {
00:06:00.652 "trtype": "tcp",
00:06:00.652 "method": "nvmf_get_transports",
00:06:00.652 "req_id": 1
00:06:00.652 }
00:06:00.652 Got JSON-RPC error response
00:06:00.652 response:
00:06:00.652 {
00:06:00.652 "code": -19,
00:06:00.652 "message": "No such device"
00:06:00.652 }
00:06:00.652 21:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:06:00.652 21:49:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:06:00.652 21:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:00.652 21:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:00.652 [2024-07-13 21:49:19.733673] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:00.652 21:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:00.652 21:49:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:06:00.652 21:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:00.652 21:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:00.652 21:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:00.652 21:49:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json
00:06:00.652 {
00:06:00.652 "subsystems": [
00:06:00.652 {
00:06:00.652 "subsystem": "keyring",
00:06:00.652 "config": []
00:06:00.652 },
00:06:00.652 {
00:06:00.652 "subsystem": "iobuf",
00:06:00.652 "config": [
00:06:00.652 {
00:06:00.652 "method": "iobuf_set_options",
00:06:00.652 "params": {
00:06:00.652 "small_pool_count": 8192,
00:06:00.652 "large_pool_count": 1024,
00:06:00.652 "small_bufsize": 8192,
00:06:00.652 "large_bufsize": 135168 00:06:00.652 } 00:06:00.652 } 00:06:00.652 ] 00:06:00.652 }, 00:06:00.652 { 00:06:00.652 "subsystem": "sock", 00:06:00.652 "config": [ 00:06:00.652 { 00:06:00.652 "method": "sock_set_default_impl", 00:06:00.652 "params": { 00:06:00.652 "impl_name": "posix" 00:06:00.652 } 00:06:00.652 }, 00:06:00.652 { 00:06:00.652 "method": "sock_impl_set_options", 00:06:00.652 "params": { 00:06:00.652 "impl_name": "ssl", 00:06:00.652 "recv_buf_size": 4096, 00:06:00.652 "send_buf_size": 4096, 00:06:00.652 "enable_recv_pipe": true, 00:06:00.652 "enable_quickack": false, 00:06:00.652 "enable_placement_id": 0, 00:06:00.652 "enable_zerocopy_send_server": true, 00:06:00.652 "enable_zerocopy_send_client": false, 00:06:00.652 "zerocopy_threshold": 0, 00:06:00.652 "tls_version": 0, 00:06:00.652 "enable_ktls": false 00:06:00.652 } 00:06:00.652 }, 00:06:00.652 { 00:06:00.652 "method": "sock_impl_set_options", 00:06:00.652 "params": { 00:06:00.652 "impl_name": "posix", 00:06:00.652 "recv_buf_size": 2097152, 00:06:00.652 "send_buf_size": 2097152, 00:06:00.652 "enable_recv_pipe": true, 00:06:00.652 "enable_quickack": false, 00:06:00.652 "enable_placement_id": 0, 00:06:00.652 "enable_zerocopy_send_server": true, 00:06:00.652 "enable_zerocopy_send_client": false, 00:06:00.652 "zerocopy_threshold": 0, 00:06:00.652 "tls_version": 0, 00:06:00.652 "enable_ktls": false 00:06:00.652 } 00:06:00.652 } 00:06:00.652 ] 00:06:00.652 }, 00:06:00.652 { 00:06:00.652 "subsystem": "vmd", 00:06:00.652 "config": [] 00:06:00.652 }, 00:06:00.652 { 00:06:00.652 "subsystem": "accel", 00:06:00.652 "config": [ 00:06:00.653 { 00:06:00.653 "method": "accel_set_options", 00:06:00.653 "params": { 00:06:00.653 "small_cache_size": 128, 00:06:00.653 "large_cache_size": 16, 00:06:00.653 "task_count": 2048, 00:06:00.653 "sequence_count": 2048, 00:06:00.653 "buf_count": 2048 00:06:00.653 } 00:06:00.653 } 00:06:00.653 ] 00:06:00.653 }, 00:06:00.653 { 00:06:00.653 "subsystem": "bdev", 
00:06:00.653 "config": [ 00:06:00.653 { 00:06:00.653 "method": "bdev_set_options", 00:06:00.653 "params": { 00:06:00.653 "bdev_io_pool_size": 65535, 00:06:00.653 "bdev_io_cache_size": 256, 00:06:00.653 "bdev_auto_examine": true, 00:06:00.653 "iobuf_small_cache_size": 128, 00:06:00.653 "iobuf_large_cache_size": 16 00:06:00.653 } 00:06:00.653 }, 00:06:00.653 { 00:06:00.653 "method": "bdev_raid_set_options", 00:06:00.653 "params": { 00:06:00.653 "process_window_size_kb": 1024 00:06:00.653 } 00:06:00.653 }, 00:06:00.653 { 00:06:00.653 "method": "bdev_iscsi_set_options", 00:06:00.653 "params": { 00:06:00.653 "timeout_sec": 30 00:06:00.653 } 00:06:00.653 }, 00:06:00.653 { 00:06:00.653 "method": "bdev_nvme_set_options", 00:06:00.653 "params": { 00:06:00.653 "action_on_timeout": "none", 00:06:00.653 "timeout_us": 0, 00:06:00.653 "timeout_admin_us": 0, 00:06:00.653 "keep_alive_timeout_ms": 10000, 00:06:00.653 "arbitration_burst": 0, 00:06:00.653 "low_priority_weight": 0, 00:06:00.653 "medium_priority_weight": 0, 00:06:00.653 "high_priority_weight": 0, 00:06:00.653 "nvme_adminq_poll_period_us": 10000, 00:06:00.653 "nvme_ioq_poll_period_us": 0, 00:06:00.653 "io_queue_requests": 0, 00:06:00.653 "delay_cmd_submit": true, 00:06:00.653 "transport_retry_count": 4, 00:06:00.653 "bdev_retry_count": 3, 00:06:00.653 "transport_ack_timeout": 0, 00:06:00.653 "ctrlr_loss_timeout_sec": 0, 00:06:00.653 "reconnect_delay_sec": 0, 00:06:00.653 "fast_io_fail_timeout_sec": 0, 00:06:00.653 "disable_auto_failback": false, 00:06:00.653 "generate_uuids": false, 00:06:00.653 "transport_tos": 0, 00:06:00.653 "nvme_error_stat": false, 00:06:00.653 "rdma_srq_size": 0, 00:06:00.653 "io_path_stat": false, 00:06:00.653 "allow_accel_sequence": false, 00:06:00.653 "rdma_max_cq_size": 0, 00:06:00.653 "rdma_cm_event_timeout_ms": 0, 00:06:00.653 "dhchap_digests": [ 00:06:00.653 "sha256", 00:06:00.653 "sha384", 00:06:00.653 "sha512" 00:06:00.653 ], 00:06:00.653 "dhchap_dhgroups": [ 00:06:00.653 "null", 
00:06:00.653 "ffdhe2048", 00:06:00.653 "ffdhe3072", 00:06:00.653 "ffdhe4096", 00:06:00.653 "ffdhe6144", 00:06:00.653 "ffdhe8192" 00:06:00.653 ] 00:06:00.653 } 00:06:00.653 }, 00:06:00.653 { 00:06:00.653 "method": "bdev_nvme_set_hotplug", 00:06:00.653 "params": { 00:06:00.653 "period_us": 100000, 00:06:00.653 "enable": false 00:06:00.653 } 00:06:00.653 }, 00:06:00.653 { 00:06:00.653 "method": "bdev_wait_for_examine" 00:06:00.653 } 00:06:00.653 ] 00:06:00.653 }, 00:06:00.653 { 00:06:00.653 "subsystem": "scsi", 00:06:00.653 "config": null 00:06:00.653 }, 00:06:00.653 { 00:06:00.653 "subsystem": "scheduler", 00:06:00.653 "config": [ 00:06:00.653 { 00:06:00.653 "method": "framework_set_scheduler", 00:06:00.653 "params": { 00:06:00.653 "name": "static" 00:06:00.653 } 00:06:00.653 } 00:06:00.653 ] 00:06:00.653 }, 00:06:00.653 { 00:06:00.653 "subsystem": "vhost_scsi", 00:06:00.653 "config": [] 00:06:00.653 }, 00:06:00.653 { 00:06:00.653 "subsystem": "vhost_blk", 00:06:00.653 "config": [] 00:06:00.653 }, 00:06:00.653 { 00:06:00.653 "subsystem": "ublk", 00:06:00.653 "config": [] 00:06:00.653 }, 00:06:00.653 { 00:06:00.653 "subsystem": "nbd", 00:06:00.653 "config": [] 00:06:00.653 }, 00:06:00.653 { 00:06:00.653 "subsystem": "nvmf", 00:06:00.653 "config": [ 00:06:00.653 { 00:06:00.653 "method": "nvmf_set_config", 00:06:00.653 "params": { 00:06:00.653 "discovery_filter": "match_any", 00:06:00.653 "admin_cmd_passthru": { 00:06:00.653 "identify_ctrlr": false 00:06:00.653 } 00:06:00.653 } 00:06:00.653 }, 00:06:00.653 { 00:06:00.653 "method": "nvmf_set_max_subsystems", 00:06:00.653 "params": { 00:06:00.653 "max_subsystems": 1024 00:06:00.653 } 00:06:00.653 }, 00:06:00.653 { 00:06:00.653 "method": "nvmf_set_crdt", 00:06:00.653 "params": { 00:06:00.653 "crdt1": 0, 00:06:00.653 "crdt2": 0, 00:06:00.653 "crdt3": 0 00:06:00.653 } 00:06:00.653 }, 00:06:00.653 { 00:06:00.653 "method": "nvmf_create_transport", 00:06:00.653 "params": { 00:06:00.653 "trtype": "TCP", 00:06:00.653 
"max_queue_depth": 128, 00:06:00.653 "max_io_qpairs_per_ctrlr": 127, 00:06:00.653 "in_capsule_data_size": 4096, 00:06:00.653 "max_io_size": 131072, 00:06:00.653 "io_unit_size": 131072, 00:06:00.653 "max_aq_depth": 128, 00:06:00.653 "num_shared_buffers": 511, 00:06:00.653 "buf_cache_size": 4294967295, 00:06:00.653 "dif_insert_or_strip": false, 00:06:00.653 "zcopy": false, 00:06:00.653 "c2h_success": true, 00:06:00.653 "sock_priority": 0, 00:06:00.653 "abort_timeout_sec": 1, 00:06:00.653 "ack_timeout": 0, 00:06:00.653 "data_wr_pool_size": 0 00:06:00.653 } 00:06:00.653 } 00:06:00.653 ] 00:06:00.653 }, 00:06:00.653 { 00:06:00.653 "subsystem": "iscsi", 00:06:00.653 "config": [ 00:06:00.653 { 00:06:00.653 "method": "iscsi_set_options", 00:06:00.653 "params": { 00:06:00.653 "node_base": "iqn.2016-06.io.spdk", 00:06:00.653 "max_sessions": 128, 00:06:00.653 "max_connections_per_session": 2, 00:06:00.653 "max_queue_depth": 64, 00:06:00.653 "default_time2wait": 2, 00:06:00.653 "default_time2retain": 20, 00:06:00.653 "first_burst_length": 8192, 00:06:00.653 "immediate_data": true, 00:06:00.653 "allow_duplicated_isid": false, 00:06:00.653 "error_recovery_level": 0, 00:06:00.653 "nop_timeout": 60, 00:06:00.653 "nop_in_interval": 30, 00:06:00.653 "disable_chap": false, 00:06:00.653 "require_chap": false, 00:06:00.653 "mutual_chap": false, 00:06:00.653 "chap_group": 0, 00:06:00.653 "max_large_datain_per_connection": 64, 00:06:00.653 "max_r2t_per_connection": 4, 00:06:00.653 "pdu_pool_size": 36864, 00:06:00.653 "immediate_data_pool_size": 16384, 00:06:00.653 "data_out_pool_size": 2048 00:06:00.653 } 00:06:00.653 } 00:06:00.653 ] 00:06:00.653 } 00:06:00.653 ] 00:06:00.653 } 00:06:00.653 21:49:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:00.653 21:49:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1279043 00:06:00.653 21:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1279043 ']' 
00:06:00.653 21:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1279043 00:06:00.653 21:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:00.653 21:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:00.653 21:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1279043 00:06:00.653 21:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:00.653 21:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:00.653 21:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1279043' 00:06:00.653 killing process with pid 1279043 00:06:00.653 21:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1279043 00:06:00.653 21:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1279043 00:06:03.185 21:49:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1279638 00:06:03.185 21:49:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:03.185 21:49:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:06:08.453 21:49:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1279638 00:06:08.453 21:49:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1279638 ']' 00:06:08.453 21:49:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1279638 00:06:08.453 21:49:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:08.453 21:49:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:08.453 21:49:27 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1279638 00:06:08.453 21:49:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:08.453 21:49:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:08.453 21:49:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1279638' 00:06:08.453 killing process with pid 1279638 00:06:08.453 21:49:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1279638 00:06:08.453 21:49:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1279638 00:06:10.406 21:49:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:06:10.406 21:49:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:06:10.406 00:06:10.406 real 0m11.331s 00:06:10.406 user 0m10.741s 00:06:10.406 sys 0m0.993s 00:06:10.406 21:49:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.406 21:49:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:10.406 ************************************ 00:06:10.406 END TEST skip_rpc_with_json 00:06:10.406 ************************************ 00:06:10.406 21:49:29 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:10.406 21:49:29 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:10.406 21:49:29 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.406 21:49:29 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.406 21:49:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.666 ************************************ 00:06:10.666 START TEST skip_rpc_with_delay 00:06:10.666 
************************************ 00:06:10.666 21:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:10.666 21:49:29 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:10.666 21:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:10.666 21:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:10.666 21:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.666 21:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.666 21:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.666 21:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.666 21:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.666 21:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.666 21:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.666 21:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:10.666 21:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 
--wait-for-rpc
00:06:10.666 [2024-07-13 21:49:29.911699] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:06:10.666 [2024-07-13 21:49:29.911792] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2
00:06:10.666 21:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1
00:06:10.666 21:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:06:10.666 21:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:06:10.666 21:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:06:10.666
00:06:10.666 real 0m0.168s
00:06:10.666 user 0m0.086s
00:06:10.666 sys 0m0.081s
00:06:10.666 21:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:10.666 21:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:06:10.666 ************************************
00:06:10.666 END TEST skip_rpc_with_delay
00:06:10.666 ************************************
00:06:10.666 21:49:30 skip_rpc -- common/autotest_common.sh@1142 -- # return 0
00:06:10.666 21:49:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:06:10.666 21:49:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:06:10.666 21:49:30 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:06:10.666 21:49:30 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:10.666 21:49:30 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:10.666 21:49:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:10.926 ************************************
00:06:10.926 START TEST exit_on_failed_rpc_init
00:06:10.926 ************************************
00:06:10.926 21:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init
00:06:10.926 21:49:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1281008 00:06:10.926 21:49:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.926 21:49:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1281008 00:06:10.926 21:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1281008 ']' 00:06:10.926 21:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.926 21:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.926 21:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.926 21:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.926 21:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:10.926 [2024-07-13 21:49:30.152369] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:10.926 [2024-07-13 21:49:30.152452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1281008 ] 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3d:02.3 cannot be used 
00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:10.926 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:10.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:10.926 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:11.186 [2024-07-13 21:49:30.317842] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.186 [2024-07-13 21:49:30.534898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.124 21:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.124 21:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:12.124 21:49:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.124 21:49:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:12.124 21:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:12.124 21:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:12.124 21:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:12.124 21:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.124 21:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:12.124 21:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.124 21:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:12.124 21:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.124 21:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:12.124 21:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:12.124 21:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:12.383 [2024-07-13 21:49:31.526311] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:12.383 [2024-07-13 21:49:31.526420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1281279 ] 00:06:12.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.383 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:12.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.383 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:12.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.383 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:12.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.383 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:12.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.383 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:12.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.383 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:12.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.383 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:12.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.383 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:12.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.384 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:12.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.384 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:12.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.384 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:12.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.384 EAL: Requested device 0000:3d:02.3 cannot be used 
00:06:12.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.384 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:12.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.384 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:12.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.384 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:12.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.384 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:12.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.384 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:12.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.384 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:12.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.384 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:12.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.384 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:12.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.384 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:12.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.384 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:12.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.384 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:12.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.384 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:12.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.384 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:12.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.384 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:12.384 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.384 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:12.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.384 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:12.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.384 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:12.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.384 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:12.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.384 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:12.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.384 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:12.384 [2024-07-13 21:49:31.686419] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.643 [2024-07-13 21:49:31.904237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.643 [2024-07-13 21:49:31.904340] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:12.643 [2024-07-13 21:49:31.904356] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:12.643 [2024-07-13 21:49:31.904370] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:13.211 21:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:13.211 21:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:13.211 21:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:13.211 21:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:13.211 21:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:13.211 21:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:13.211 21:49:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:13.211 21:49:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1281008 00:06:13.211 21:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1281008 ']' 00:06:13.211 21:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1281008 00:06:13.211 21:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:13.211 21:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:13.211 21:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1281008 00:06:13.211 21:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:13.211 21:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:13.211 21:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1281008' 
00:06:13.211 killing process with pid 1281008 00:06:13.211 21:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1281008 00:06:13.211 21:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1281008 00:06:15.746 00:06:15.746 real 0m4.636s 00:06:15.746 user 0m5.128s 00:06:15.746 sys 0m0.761s 00:06:15.746 21:49:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.746 21:49:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:15.746 ************************************ 00:06:15.746 END TEST exit_on_failed_rpc_init 00:06:15.746 ************************************ 00:06:15.746 21:49:34 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:15.746 21:49:34 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:06:15.746 00:06:15.746 real 0m23.989s 00:06:15.746 user 0m23.119s 00:06:15.746 sys 0m2.574s 00:06:15.747 21:49:34 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.747 21:49:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.747 ************************************ 00:06:15.747 END TEST skip_rpc 00:06:15.747 ************************************ 00:06:15.747 21:49:34 -- common/autotest_common.sh@1142 -- # return 0 00:06:15.747 21:49:34 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:15.747 21:49:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.747 21:49:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.747 21:49:34 -- common/autotest_common.sh@10 -- # set +x 00:06:15.747 ************************************ 00:06:15.747 START TEST rpc_client 00:06:15.747 ************************************ 00:06:15.747 21:49:34 rpc_client -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:15.747 * Looking for test storage... 00:06:15.747 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client 00:06:15.747 21:49:34 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:15.747 OK 00:06:15.747 21:49:34 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:15.747 00:06:15.747 real 0m0.168s 00:06:15.747 user 0m0.081s 00:06:15.747 sys 0m0.098s 00:06:15.747 21:49:34 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.747 21:49:34 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:15.747 ************************************ 00:06:15.747 END TEST rpc_client 00:06:15.747 ************************************ 00:06:15.747 21:49:35 -- common/autotest_common.sh@1142 -- # return 0 00:06:15.747 21:49:35 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:06:15.747 21:49:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.747 21:49:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.747 21:49:35 -- common/autotest_common.sh@10 -- # set +x 00:06:15.747 ************************************ 00:06:15.747 START TEST json_config 00:06:15.747 ************************************ 00:06:15.747 21:49:35 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:06:16.007 21:49:35 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:06:16.007 21:49:35 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:16.007 21:49:35 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.007 21:49:35 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.007 21:49:35 
json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.007 21:49:35 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.007 21:49:35 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.007 21:49:35 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.007 21:49:35 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.007 21:49:35 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.007 21:49:35 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.007 21:49:35 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.007 21:49:35 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:06:16.007 21:49:35 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:06:16.007 21:49:35 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.007 21:49:35 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.007 21:49:35 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:16.007 21:49:35 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.007 21:49:35 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:06:16.007 21:49:35 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.007 21:49:35 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.007 21:49:35 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.007 21:49:35 json_config -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.007 21:49:35 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.007 21:49:35 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.007 21:49:35 json_config -- paths/export.sh@5 -- # export PATH 00:06:16.007 21:49:35 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.007 21:49:35 json_config -- nvmf/common.sh@47 -- # : 0 00:06:16.007 21:49:35 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:16.007 
21:49:35 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:16.007 21:49:35 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.007 21:49:35 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.007 21:49:35 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.007 21:49:35 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:16.007 21:49:35 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:16.007 21:49:35 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:16.007 21:49:35 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:06:16.007 21:49:35 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:16.007 21:49:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:16.007 21:49:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:16.007 21:49:35 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:16.007 21:49:35 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:16.007 21:49:35 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:16.007 21:49:35 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:16.007 21:49:35 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:16.007 21:49:35 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:16.007 21:49:35 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:16.007 21:49:35 json_config -- json_config/json_config.sh@34 -- # 
configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json') 00:06:16.007 21:49:35 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:16.007 21:49:35 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:16.007 21:49:35 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:16.007 21:49:35 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:16.007 INFO: JSON configuration test init 00:06:16.007 21:49:35 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:16.007 21:49:35 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:16.007 21:49:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:16.007 21:49:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.007 21:49:35 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:16.007 21:49:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:16.007 21:49:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.007 21:49:35 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:16.007 21:49:35 json_config -- json_config/common.sh@9 -- # local app=target 00:06:16.007 21:49:35 json_config -- json_config/common.sh@10 -- # shift 00:06:16.007 21:49:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:16.007 21:49:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:16.007 21:49:35 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:16.007 21:49:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:16.007 21:49:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 
00:06:16.007 21:49:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1282054 00:06:16.007 21:49:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:16.007 Waiting for target to run... 00:06:16.007 21:49:35 json_config -- json_config/common.sh@25 -- # waitforlisten 1282054 /var/tmp/spdk_tgt.sock 00:06:16.007 21:49:35 json_config -- common/autotest_common.sh@829 -- # '[' -z 1282054 ']' 00:06:16.007 21:49:35 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:16.007 21:49:35 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:16.007 21:49:35 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.007 21:49:35 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:16.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:16.007 21:49:35 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.007 21:49:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.007 [2024-07-13 21:49:35.294894] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:16.007 [2024-07-13 21:49:35.294995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1282054 ] 00:06:16.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.577 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:16.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.577 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:16.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.577 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:16.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.577 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:16.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.577 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:16.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.577 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:16.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.577 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:16.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.577 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:16.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.577 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:16.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.577 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:16.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.577 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:16.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.577 EAL: Requested device 0000:3d:02.3 cannot be used 
00:06:16.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.577 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:16.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.577 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:16.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.577 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:16.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.577 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:16.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.577 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:16.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.577 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:16.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.577 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:16.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.577 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:16.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.578 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:16.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.578 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:16.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.578 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:16.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.578 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:16.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.578 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:16.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.578 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:16.578 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.578 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:16.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.578 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:16.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.578 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:16.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.578 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:16.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.578 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:16.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.578 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:16.578 [2024-07-13 21:49:35.828667] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.837 [2024-07-13 21:49:36.025664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.404 21:49:36 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.404 21:49:36 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:17.404 21:49:36 json_config -- json_config/common.sh@26 -- # echo '' 00:06:17.404 00:06:17.404 21:49:36 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:06:17.404 21:49:36 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:17.404 21:49:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:17.404 21:49:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.404 21:49:36 json_config -- json_config/json_config.sh@95 -- # [[ 1 -eq 1 ]] 00:06:17.404 21:49:36 json_config -- json_config/json_config.sh@96 -- # tgt_rpc dpdk_cryptodev_scan_accel_module 00:06:17.404 21:49:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock dpdk_cryptodev_scan_accel_module 00:06:17.663 21:49:36 json_config -- json_config/json_config.sh@97 -- # tgt_rpc accel_assign_opc -o encrypt -m dpdk_cryptodev 00:06:17.663 21:49:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o encrypt -m dpdk_cryptodev 00:06:17.663 [2024-07-13 21:49:37.040914] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:06:17.920 21:49:37 json_config -- json_config/json_config.sh@98 -- # tgt_rpc accel_assign_opc -o decrypt -m dpdk_cryptodev 00:06:17.920 21:49:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o decrypt -m dpdk_cryptodev 00:06:17.920 [2024-07-13 21:49:37.209345] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:06:17.920 21:49:37 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:17.920 21:49:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:17.920 21:49:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.920 21:49:37 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:17.920 21:49:37 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:17.920 21:49:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:18.539 [2024-07-13 21:49:37.621324] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:25.106 21:49:43 json_config -- 
json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:25.106 21:49:43 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:25.106 21:49:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:25.106 21:49:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:25.106 21:49:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:25.106 21:49:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:06:25.106 21:49:43 json_config -- common/autotest_common.sh@722 
-- # xtrace_disable 00:06:25.106 21:49:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@107 -- # expected_notifications=() 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@107 -- # local expected_notifications 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@111 -- # get_notifications 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:06:25.106 21:49:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:06:25.106 21:49:43 json_config -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:06:25.106 21:49:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:06:25.106 Nvme0n1p0 
Nvme0n1p1 00:06:25.106 21:49:44 json_config -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:06:25.106 21:49:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:06:25.106 [2024-07-13 21:49:44.270060] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:25.106 [2024-07-13 21:49:44.270117] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:25.106 00:06:25.106 21:49:44 json_config -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:06:25.106 21:49:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:06:25.106 Malloc3 00:06:25.106 21:49:44 json_config -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:06:25.106 21:49:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:06:25.365 [2024-07-13 21:49:44.625226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:25.365 [2024-07-13 21:49:44.625291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:25.365 [2024-07-13 21:49:44.625322] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:06:25.365 [2024-07-13 21:49:44.625337] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:25.365 [2024-07-13 21:49:44.627569] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:25.365 [2024-07-13 21:49:44.627603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 
00:06:25.365 PTBdevFromMalloc3 00:06:25.365 21:49:44 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:06:25.365 21:49:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:06:25.623 Null0 00:06:25.623 21:49:44 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:06:25.623 21:49:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:06:25.623 Malloc0 00:06:25.882 21:49:45 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:06:25.883 21:49:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:06:25.883 Malloc1 00:06:25.883 21:49:45 json_config -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:06:25.883 21:49:45 json_config -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:06:26.143 102400+0 records in 00:06:26.143 102400+0 records out 00:06:26.143 104857600 bytes (105 MB, 100 MiB) copied, 0.212665 s, 493 MB/s 00:06:26.143 21:49:45 json_config -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:06:26.143 21:49:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:06:26.401 
aio_disk 00:06:26.401 21:49:45 json_config -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:06:26.401 21:49:45 json_config -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:06:26.401 21:49:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:06:30.587 89ae1110-c76d-4304-86f1-f446a5054551 00:06:30.587 21:49:49 json_config -- json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:06:30.587 21:49:49 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:06:30.587 21:49:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:06:30.588 21:49:49 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:06:30.588 21:49:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:06:30.846 21:49:50 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:06:30.846 21:49:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:06:30.846 21:49:50 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:06:30.846 
21:49:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:06:31.105 21:49:50 json_config -- json_config/json_config.sh@157 -- # [[ 1 -eq 1 ]] 00:06:31.105 21:49:50 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:06:31.105 21:49:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:06:31.365 MallocForCryptoBdev 00:06:31.365 21:49:50 json_config -- json_config/json_config.sh@159 -- # lspci -d:37c8 00:06:31.365 21:49:50 json_config -- json_config/json_config.sh@159 -- # wc -l 00:06:31.365 21:49:50 json_config -- json_config/json_config.sh@159 -- # [[ 5 -eq 0 ]] 00:06:31.365 21:49:50 json_config -- json_config/json_config.sh@162 -- # local crypto_driver=crypto_qat 00:06:31.365 21:49:50 json_config -- json_config/json_config.sh@165 -- # tgt_rpc bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:06:31.365 21:49:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:06:31.365 [2024-07-13 21:49:50.741870] vbdev_crypto_rpc.c: 136:rpc_bdev_crypto_create: *WARNING*: "crypto_pmd" parameters is obsolete and ignored 00:06:31.365 CryptoMallocBdev 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@169 -- # expected_notifications+=(bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev) 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@178 -- # tgt_check_notifications 
bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:1c48469a-d195-4546-9dd5-169e383e614e bdev_register:d623f8bf-9bc2-4af6-b1f7-814b641427cc bdev_register:30fdef39-316e-48c1-8987-7eff72c26950 bdev_register:6ff002d5-853e-454c-a6b7-cc9bc3735510 bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@67 -- # local events_to_check 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@68 -- # local recorded_events 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:1c48469a-d195-4546-9dd5-169e383e614e bdev_register:d623f8bf-9bc2-4af6-b1f7-814b641427cc bdev_register:30fdef39-316e-48c1-8987-7eff72c26950 bdev_register:6ff002d5-853e-454c-a6b7-cc9bc3735510 bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@71 -- # sort 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@72 -- # get_notifications 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@72 -- # sort 00:06:31.625 21:49:50 json_config -- 
json_config/json_config.sh@61 -- # IFS=: 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:06:31.625 21:49:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc3 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:31.625 21:49:50 
json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:31.625 21:49:50 json_config -- 
json_config/json_config.sh@62 -- # echo bdev_register:1c48469a-d195-4546-9dd5-169e383e614e 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:d623f8bf-9bc2-4af6-b1f7-814b641427cc 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:30fdef39-316e-48c1-8987-7eff72c26950 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:6ff002d5-853e-454c-a6b7-cc9bc3735510 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:MallocForCryptoBdev 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:CryptoMallocBdev 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@74 -- # [[ bdev_register:1c48469a-d195-4546-9dd5-169e383e614e bdev_register:30fdef39-316e-48c1-8987-7eff72c26950 
bdev_register:6ff002d5-853e-454c-a6b7-cc9bc3735510 bdev_register:aio_disk bdev_register:CryptoMallocBdev bdev_register:d623f8bf-9bc2-4af6-b1f7-814b641427cc bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\1\c\4\8\4\6\9\a\-\d\1\9\5\-\4\5\4\6\-\9\d\d\5\-\1\6\9\e\3\8\3\e\6\1\4\e\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\3\0\f\d\e\f\3\9\-\3\1\6\e\-\4\8\c\1\-\8\9\8\7\-\7\e\f\f\7\2\c\2\6\9\5\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\6\f\f\0\0\2\d\5\-\8\5\3\e\-\4\5\4\c\-\a\6\b\7\-\c\c\9\b\c\3\7\3\5\5\1\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\C\r\y\p\t\o\M\a\l\l\o\c\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\d\6\2\3\f\8\b\f\-\9\b\c\2\-\4\a\f\6\-\b\1\f\7\-\8\1\4\b\6\4\1\4\2\7\c\c\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\F\o\r\C\r\y\p\t\o\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3 ]] 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@86 -- # cat 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:1c48469a-d195-4546-9dd5-169e383e614e bdev_register:30fdef39-316e-48c1-8987-7eff72c26950 bdev_register:6ff002d5-853e-454c-a6b7-cc9bc3735510 bdev_register:aio_disk bdev_register:CryptoMallocBdev bdev_register:d623f8bf-9bc2-4af6-b1f7-814b641427cc 
bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 00:06:31.625 Expected events matched: 00:06:31.625 bdev_register:1c48469a-d195-4546-9dd5-169e383e614e 00:06:31.625 bdev_register:30fdef39-316e-48c1-8987-7eff72c26950 00:06:31.625 bdev_register:6ff002d5-853e-454c-a6b7-cc9bc3735510 00:06:31.625 bdev_register:aio_disk 00:06:31.625 bdev_register:CryptoMallocBdev 00:06:31.625 bdev_register:d623f8bf-9bc2-4af6-b1f7-814b641427cc 00:06:31.625 bdev_register:Malloc0 00:06:31.625 bdev_register:Malloc0p0 00:06:31.625 bdev_register:Malloc0p1 00:06:31.625 bdev_register:Malloc0p2 00:06:31.625 bdev_register:Malloc1 00:06:31.625 bdev_register:Malloc3 00:06:31.625 bdev_register:MallocForCryptoBdev 00:06:31.625 bdev_register:Null0 00:06:31.625 bdev_register:Nvme0n1 00:06:31.625 bdev_register:Nvme0n1p0 00:06:31.625 bdev_register:Nvme0n1p1 00:06:31.625 bdev_register:PTBdevFromMalloc3 00:06:31.625 21:49:50 json_config -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:06:31.625 21:49:50 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:31.625 21:49:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.625 21:49:51 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:31.625 21:49:51 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:31.625 21:49:51 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:31.625 21:49:51 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:31.625 21:49:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:31.625 21:49:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.884 21:49:51 json_config -- 
json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:31.884 21:49:51 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:31.884 21:49:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:31.884 MallocBdevForConfigChangeCheck 00:06:31.884 21:49:51 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:31.884 21:49:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:31.884 21:49:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.143 21:49:51 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:32.143 21:49:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:32.402 21:49:51 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:32.402 INFO: shutting down applications... 
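[Editor's note] The `save_config` call above goes through the `tgt_rpc` helper (`common.sh@57`), which forwards every RPC to `scripts/rpc.py` on the target's UNIX socket. A dry-run sketch of that helper, using the paths and socket shown in the log (the `DRY_RUN` switch is an illustrative addition, not part of SPDK's `common.sh`):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the tgt_rpc helper seen in the trace (common.sh@57).
# The real helper simply execs SPDK's rpc.py against the target socket;
# DRY_RUN=1 (a stand-in added here) prints the invocation instead, so the
# sketch can run without an SPDK checkout.
set -euo pipefail

ROOTDIR=${ROOTDIR:-/var/jenkins/workspace/crypto-phy-autotest/spdk}  # path from the log
RPC_SOCK=${RPC_SOCK:-/var/tmp/spdk_tgt.sock}                         # socket from the log
DRY_RUN=${DRY_RUN:-1}

tgt_rpc() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "$ROOTDIR/scripts/rpc.py -s $RPC_SOCK $*"
    else
        "$ROOTDIR/scripts/rpc.py" -s "$RPC_SOCK" "$@"
    fi
}

# Snapshot the running configuration before teardown, as the trace does.
tgt_rpc save_config
```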
00:06:32.402 21:49:51 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:32.402 21:49:51 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:32.402 21:49:51 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:32.402 21:49:51 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:32.402 [2024-07-13 21:49:51.727328] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:06:35.691 Calling clear_iscsi_subsystem 00:06:35.691 Calling clear_nvmf_subsystem 00:06:35.691 Calling clear_nbd_subsystem 00:06:35.691 Calling clear_ublk_subsystem 00:06:35.691 Calling clear_vhost_blk_subsystem 00:06:35.691 Calling clear_vhost_scsi_subsystem 00:06:35.691 Calling clear_bdev_subsystem 00:06:35.691 21:49:54 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py 00:06:35.691 21:49:54 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:35.691 21:49:54 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:35.691 21:49:54 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:35.691 21:49:54 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:35.691 21:49:54 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:35.691 21:49:54 json_config -- json_config/json_config.sh@345 -- # break 00:06:35.691 21:49:54 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:35.691 21:49:54 
json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:35.691 21:49:54 json_config -- json_config/common.sh@31 -- # local app=target 00:06:35.691 21:49:54 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:35.691 21:49:54 json_config -- json_config/common.sh@35 -- # [[ -n 1282054 ]] 00:06:35.691 21:49:54 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1282054 00:06:35.691 21:49:54 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:35.691 21:49:54 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:35.691 21:49:54 json_config -- json_config/common.sh@41 -- # kill -0 1282054 00:06:35.691 21:49:54 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:35.950 21:49:55 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:35.950 21:49:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:35.950 21:49:55 json_config -- json_config/common.sh@41 -- # kill -0 1282054 00:06:35.950 21:49:55 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:36.518 21:49:55 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:36.518 21:49:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:36.518 21:49:55 json_config -- json_config/common.sh@41 -- # kill -0 1282054 00:06:36.518 21:49:55 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:37.086 21:49:56 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:37.086 21:49:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:37.086 21:49:56 json_config -- json_config/common.sh@41 -- # kill -0 1282054 00:06:37.086 21:49:56 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:37.655 21:49:56 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:37.655 21:49:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:37.655 21:49:56 json_config -- json_config/common.sh@41 -- # kill -0 1282054 00:06:37.655 21:49:56 json_config -- json_config/common.sh@42 -- # 
app_pid["$app"]= 00:06:37.655 21:49:56 json_config -- json_config/common.sh@43 -- # break 00:06:37.655 21:49:56 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:37.655 21:49:56 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:37.655 SPDK target shutdown done 00:06:37.655 21:49:56 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:37.655 INFO: relaunching applications... 00:06:37.655 21:49:56 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:37.655 21:49:56 json_config -- json_config/common.sh@9 -- # local app=target 00:06:37.655 21:49:56 json_config -- json_config/common.sh@10 -- # shift 00:06:37.655 21:49:56 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:37.655 21:49:56 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:37.655 21:49:56 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:37.655 21:49:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:37.655 21:49:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:37.655 21:49:56 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1285884 00:06:37.655 21:49:56 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:37.655 Waiting for target to run... 
00:06:37.655 21:49:56 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:37.655 21:49:56 json_config -- json_config/common.sh@25 -- # waitforlisten 1285884 /var/tmp/spdk_tgt.sock 00:06:37.655 21:49:56 json_config -- common/autotest_common.sh@829 -- # '[' -z 1285884 ']' 00:06:37.655 21:49:56 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:37.655 21:49:56 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.655 21:49:56 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:37.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:37.655 21:49:56 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.655 21:49:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.655 [2024-07-13 21:49:56.893646] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
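[Editor's note] The `waitforlisten 1285884 /var/tmp/spdk_tgt.sock` step above blocks until the relaunched target is listening. A minimal stand-in for that wait, assuming only that readiness is signalled by the UNIX socket appearing at the given path (the real `common.sh` helper also checks the PID and probes the RPC server):

```shell
#!/usr/bin/env bash
# Sketch of waiting for an RPC socket to appear, in the spirit of
# common.sh's waitforlisten; max_retries mirrors the harness default of
# 100. This version only tests for the socket file -- the real helper
# additionally verifies the process and issues a probe RPC.
waitforlisten_sketch() {
    local sock=$1 max_retries=${2:-100} i=0
    while [ "$i" -lt "$max_retries" ]; do
        if [ -S "$sock" ]; then
            return 0        # socket exists: target is (likely) up
        fi
        sleep 0.1
        i=$((i + 1))
    done
    return 1                # timed out waiting for the target
}
```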
00:06:37.655 [2024-07-13 21:49:56.893746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1285884 ] 00:06:38.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.221 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:38.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.221 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:38.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.221 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:38.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.221 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:38.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.221 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:38.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.221 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:38.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.221 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:38.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.222 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:38.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.222 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:38.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.222 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:38.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.222 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:38.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.222 EAL: Requested device 0000:3d:02.3 cannot be used 
00:06:38.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.222 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:38.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.222 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:38.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.222 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:38.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.222 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:38.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.222 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:38.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.222 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:38.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.222 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:38.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.222 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:38.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.222 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:38.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.222 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:38.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.222 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:38.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.222 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:38.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.222 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:38.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.222 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:38.222 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.222 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:38.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.222 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:38.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.222 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:38.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.222 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:38.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.222 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:38.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.222 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:38.222 [2024-07-13 21:49:57.438498] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.481 [2024-07-13 21:49:57.645170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.481 [2024-07-13 21:49:57.699158] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:06:38.481 [2024-07-13 21:49:57.707190] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:06:38.481 [2024-07-13 21:49:57.715202] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:06:38.739 [2024-07-13 21:49:57.969486] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:06:42.030 [2024-07-13 21:50:00.952745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:42.030 [2024-07-13 21:50:00.952813] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:06:42.030 [2024-07-13 21:50:00.952844] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending 
base bdev arrival 00:06:42.030 [2024-07-13 21:50:00.960756] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:06:42.030 [2024-07-13 21:50:00.960797] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:06:42.030 [2024-07-13 21:50:00.968761] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:42.030 [2024-07-13 21:50:00.968796] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:42.030 [2024-07-13 21:50:00.976796] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "CryptoMallocBdev_AES_CBC" 00:06:42.030 [2024-07-13 21:50:00.976864] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: MallocForCryptoBdev 00:06:42.030 [2024-07-13 21:50:00.976881] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:44.597 [2024-07-13 21:50:03.910472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:44.597 [2024-07-13 21:50:03.910531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:44.597 [2024-07-13 21:50:03.910548] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:06:44.597 [2024-07-13 21:50:03.910558] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:44.597 [2024-07-13 21:50:03.910979] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:44.597 [2024-07-13 21:50:03.911003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:06:44.857 21:50:04 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.857 21:50:04 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:44.857 21:50:04 json_config -- json_config/common.sh@26 -- # echo '' 00:06:44.857 00:06:44.857 21:50:04 json_config -- 
json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:44.857 21:50:04 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:44.857 INFO: Checking if target configuration is the same... 00:06:44.857 21:50:04 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:44.857 21:50:04 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:44.857 21:50:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:44.857 + '[' 2 -ne 2 ']' 00:06:44.857 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:44.857 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 00:06:44.857 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:06:44.857 +++ basename /dev/fd/62 00:06:44.857 ++ mktemp /tmp/62.XXX 00:06:44.857 + tmp_file_1=/tmp/62.zD1 00:06:44.857 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:44.857 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:44.857 + tmp_file_2=/tmp/spdk_tgt_config.json.yQ5 00:06:44.857 + ret=0 00:06:44.857 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:45.115 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:45.115 + diff -u /tmp/62.zD1 /tmp/spdk_tgt_config.json.yQ5 00:06:45.115 + echo 'INFO: JSON config files are the same' 00:06:45.115 INFO: JSON config files are the same 00:06:45.115 + rm /tmp/62.zD1 /tmp/spdk_tgt_config.json.yQ5 00:06:45.115 + exit 0 00:06:45.115 21:50:04 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:45.115 21:50:04 json_config -- 
json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:45.115 INFO: changing configuration and checking if this can be detected... 00:06:45.115 21:50:04 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:45.115 21:50:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:45.375 21:50:04 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:45.375 21:50:04 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:45.375 21:50:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:45.375 + '[' 2 -ne 2 ']' 00:06:45.375 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:45.375 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 
00:06:45.375 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:06:45.375 +++ basename /dev/fd/62 00:06:45.375 ++ mktemp /tmp/62.XXX 00:06:45.375 + tmp_file_1=/tmp/62.ocK 00:06:45.375 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:45.375 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:45.375 + tmp_file_2=/tmp/spdk_tgt_config.json.HMC 00:06:45.375 + ret=0 00:06:45.375 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:45.633 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:45.634 + diff -u /tmp/62.ocK /tmp/spdk_tgt_config.json.HMC 00:06:45.634 + ret=1 00:06:45.634 + echo '=== Start of file: /tmp/62.ocK ===' 00:06:45.634 + cat /tmp/62.ocK 00:06:45.634 + echo '=== End of file: /tmp/62.ocK ===' 00:06:45.634 + echo '' 00:06:45.634 + echo '=== Start of file: /tmp/spdk_tgt_config.json.HMC ===' 00:06:45.634 + cat /tmp/spdk_tgt_config.json.HMC 00:06:45.634 + echo '=== End of file: /tmp/spdk_tgt_config.json.HMC ===' 00:06:45.634 + echo '' 00:06:45.634 + rm /tmp/62.ocK /tmp/spdk_tgt_config.json.HMC 00:06:45.634 + exit 1 00:06:45.634 21:50:05 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:45.634 INFO: configuration change detected. 
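[Editor's note] The drift check above (`json_diff.sh`) saves the live config twice, normalizes both copies with `config_filter.py -method sort`, and diffs them: `ret=0` means identical, `ret=1` means the configuration changed. A self-contained sketch of the compare step, with `python3 -m json.tool --sort-keys` standing in for SPDK's filter (which also strips volatile fields):

```shell
#!/usr/bin/env bash
# Sketch of json_diff.sh's compare: normalize both JSON files so key
# order cannot cause false positives, then diff. Returns 0 if identical,
# 1 if the configuration changed -- matching the ret=0 / ret=1 in the log.
json_config_diff() {
    local a b ret=0
    a=$(mktemp /tmp/62.XXX)            # temp-file pattern from the log
    b=$(mktemp /tmp/spdk_cfg.XXX)
    python3 -m json.tool --sort-keys "$1" > "$a"
    python3 -m json.tool --sort-keys "$2" > "$b"
    diff -u "$a" "$b" || ret=1
    rm -f "$a" "$b"
    return $ret
}
```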
00:06:45.634 21:50:05 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:45.634 21:50:05 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:45.634 21:50:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:45.634 21:50:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.634 21:50:05 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:45.634 21:50:05 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:45.634 21:50:05 json_config -- json_config/json_config.sh@317 -- # [[ -n 1285884 ]] 00:06:45.634 21:50:05 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:45.634 21:50:05 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:45.634 21:50:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:45.634 21:50:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.634 21:50:05 json_config -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:06:45.634 21:50:05 json_config -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:06:45.634 21:50:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:06:45.892 21:50:05 json_config -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:06:45.892 21:50:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:06:46.151 21:50:05 json_config -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:06:46.151 21:50:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete 
lvs_test/snapshot0 00:06:46.151 21:50:05 json_config -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:06:46.151 21:50:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:06:46.410 21:50:05 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:46.410 21:50:05 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:46.410 21:50:05 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:46.410 21:50:05 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:46.410 21:50:05 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:46.410 21:50:05 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:46.410 21:50:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:46.410 21:50:05 json_config -- json_config/json_config.sh@323 -- # killprocess 1285884 00:06:46.410 21:50:05 json_config -- common/autotest_common.sh@948 -- # '[' -z 1285884 ']' 00:06:46.410 21:50:05 json_config -- common/autotest_common.sh@952 -- # kill -0 1285884 00:06:46.410 21:50:05 json_config -- common/autotest_common.sh@953 -- # uname 00:06:46.410 21:50:05 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:46.410 21:50:05 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1285884 00:06:46.668 21:50:05 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:46.668 21:50:05 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:46.668 21:50:05 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1285884' 00:06:46.668 killing process with pid 1285884 00:06:46.668 21:50:05 json_config -- common/autotest_common.sh@967 -- # kill 1285884 00:06:46.668 21:50:05 json_config -- 
common/autotest_common.sh@972 -- # wait 1285884 00:06:50.855 21:50:09 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:50.856 21:50:09 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:50.856 21:50:09 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:50.856 21:50:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:50.856 21:50:09 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:50.856 21:50:09 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:50.856 INFO: Success 00:06:50.856 00:06:50.856 real 0m34.892s 00:06:50.856 user 0m37.268s 00:06:50.856 sys 0m3.818s 00:06:50.856 21:50:09 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.856 21:50:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:50.856 ************************************ 00:06:50.856 END TEST json_config 00:06:50.856 ************************************ 00:06:50.856 21:50:09 -- common/autotest_common.sh@1142 -- # return 0 00:06:50.856 21:50:09 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:50.856 21:50:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:50.856 21:50:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.856 21:50:09 -- common/autotest_common.sh@10 -- # set +x 00:06:50.856 ************************************ 00:06:50.856 START TEST json_config_extra_key 00:06:50.856 ************************************ 00:06:50.856 21:50:10 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:50.856 21:50:10 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:06:50.856 21:50:10 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:50.856 21:50:10 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:50.856 21:50:10 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:50.856 21:50:10 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:50.856 21:50:10 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:50.856 21:50:10 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:50.856 21:50:10 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:50.856 21:50:10 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:50.856 21:50:10 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:50.856 21:50:10 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:50.856 21:50:10 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:50.856 21:50:10 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:06:50.856 21:50:10 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:06:50.856 21:50:10 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:50.856 21:50:10 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:50.856 21:50:10 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:50.856 21:50:10 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:50.856 21:50:10 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:06:50.856 21:50:10 
json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.856 21:50:10 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.856 21:50:10 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.856 21:50:10 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.856 21:50:10 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.856 21:50:10 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.856 21:50:10 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:50.856 21:50:10 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.856 21:50:10 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:50.856 21:50:10 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:50.856 21:50:10 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:50.856 21:50:10 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:50.856 21:50:10 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:50.856 21:50:10 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:50.856 21:50:10 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:50.856 21:50:10 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:50.856 21:50:10 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:50.856 21:50:10 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:06:50.856 21:50:10 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:50.856 21:50:10 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:50.856 21:50:10 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:50.856 21:50:10 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:50.856 21:50:10 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 
1024') 00:06:50.856 21:50:10 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:50.856 21:50:10 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:50.856 21:50:10 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:50.856 21:50:10 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:50.856 21:50:10 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:50.856 INFO: launching applications... 00:06:50.856 21:50:10 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:06:50.856 21:50:10 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:50.856 21:50:10 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:50.856 21:50:10 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:50.856 21:50:10 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:50.856 21:50:10 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:50.856 21:50:10 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:50.856 21:50:10 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:50.856 21:50:10 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1288289 00:06:50.856 21:50:10 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:50.856 Waiting for target to run... 
00:06:50.856 21:50:10 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1288289 /var/tmp/spdk_tgt.sock 00:06:50.856 21:50:10 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1288289 ']' 00:06:50.856 21:50:10 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:06:50.856 21:50:10 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:50.856 21:50:10 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.856 21:50:10 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:50.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:50.856 21:50:10 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.856 21:50:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:51.115 [2024-07-13 21:50:10.244894] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:51.115 [2024-07-13 21:50:10.244994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1288289 ] 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3d:02.3 cannot be used 
00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:51.374 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:51.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.374 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:51.374 [2024-07-13 21:50:10.762997] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.632 [2024-07-13 21:50:10.963200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.596 21:50:11 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.596 21:50:11 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:52.596 21:50:11 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:52.596 00:06:52.596 21:50:11 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:52.596 INFO: shutting down applications... 
00:06:52.596 21:50:11 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:52.596 21:50:11 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:52.596 21:50:11 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:52.596 21:50:11 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1288289 ]] 00:06:52.596 21:50:11 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1288289 00:06:52.596 21:50:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:52.596 21:50:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:52.596 21:50:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1288289 00:06:52.596 21:50:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:52.855 21:50:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:52.855 21:50:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:52.855 21:50:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1288289 00:06:52.855 21:50:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:53.422 21:50:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:53.422 21:50:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:53.422 21:50:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1288289 00:06:53.422 21:50:12 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:53.422 21:50:12 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:53.422 21:50:12 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:53.422 21:50:12 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:53.422 SPDK target shutdown done 00:06:53.422 21:50:12 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # 
echo Success 00:06:53.422 Success 00:06:53.422 00:06:53.422 real 0m2.655s 00:06:53.422 user 0m2.268s 00:06:53.422 sys 0m0.742s 00:06:53.422 21:50:12 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.422 21:50:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:53.422 ************************************ 00:06:53.422 END TEST json_config_extra_key 00:06:53.422 ************************************ 00:06:53.422 21:50:12 -- common/autotest_common.sh@1142 -- # return 0 00:06:53.422 21:50:12 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:53.422 21:50:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:53.422 21:50:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.422 21:50:12 -- common/autotest_common.sh@10 -- # set +x 00:06:53.422 ************************************ 00:06:53.422 START TEST alias_rpc 00:06:53.422 ************************************ 00:06:53.422 21:50:12 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:53.682 * Looking for test storage... 
00:06:53.682 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc 00:06:53.682 21:50:12 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:53.682 21:50:12 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1288869 00:06:53.682 21:50:12 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:53.682 21:50:12 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1288869 00:06:53.682 21:50:12 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1288869 ']' 00:06:53.682 21:50:12 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.682 21:50:12 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.682 21:50:12 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.682 21:50:12 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.682 21:50:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.682 [2024-07-13 21:50:12.966824] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:53.682 [2024-07-13 21:50:12.966925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1288869 ] 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3d:02.3 cannot be used 
00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:53.682 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:53.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.682 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:53.942 [2024-07-13 21:50:13.130977] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.200 [2024-07-13 21:50:13.334212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.136 21:50:14 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.136 21:50:14 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:55.136 21:50:14 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:55.136 21:50:14 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1288869 00:06:55.136 21:50:14 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1288869 ']' 00:06:55.136 21:50:14 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1288869 00:06:55.136 21:50:14 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:55.136 21:50:14 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:55.136 21:50:14 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1288869 00:06:55.136 21:50:14 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:55.136 21:50:14 
alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:55.136 21:50:14 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1288869' 00:06:55.136 killing process with pid 1288869 00:06:55.136 21:50:14 alias_rpc -- common/autotest_common.sh@967 -- # kill 1288869 00:06:55.136 21:50:14 alias_rpc -- common/autotest_common.sh@972 -- # wait 1288869 00:06:57.673 00:06:57.673 real 0m4.034s 00:06:57.673 user 0m3.934s 00:06:57.673 sys 0m0.627s 00:06:57.673 21:50:16 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.673 21:50:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.673 ************************************ 00:06:57.673 END TEST alias_rpc 00:06:57.673 ************************************ 00:06:57.673 21:50:16 -- common/autotest_common.sh@1142 -- # return 0 00:06:57.673 21:50:16 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:57.673 21:50:16 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:57.673 21:50:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:57.673 21:50:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.673 21:50:16 -- common/autotest_common.sh@10 -- # set +x 00:06:57.673 ************************************ 00:06:57.673 START TEST spdkcli_tcp 00:06:57.673 ************************************ 00:06:57.673 21:50:16 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:57.673 * Looking for test storage... 
00:06:57.673 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli 00:06:57.673 21:50:16 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/common.sh 00:06:57.673 21:50:16 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:57.673 21:50:16 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py 00:06:57.673 21:50:16 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:57.673 21:50:16 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:57.673 21:50:16 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:57.673 21:50:16 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:57.673 21:50:16 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:57.673 21:50:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.673 21:50:16 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1289653 00:06:57.673 21:50:16 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1289653 00:06:57.673 21:50:16 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:57.673 21:50:16 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1289653 ']' 00:06:57.673 21:50:16 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.673 21:50:16 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.673 21:50:16 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:57.673 21:50:16 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.673 21:50:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.932 [2024-07-13 21:50:17.088169] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:57.932 [2024-07-13 21:50:17.088266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1289653 ] 00:06:57.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.932 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:57.932 [same two messages repeated for devices 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7] 00:06:57.932 [2024-07-13 21:50:17.249111] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:58.191 [2024-07-13 21:50:17.455924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.191 [2024-07-13 21:50:17.455928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.129 21:50:18 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.129 21:50:18 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:59.129 21:50:18 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1289836 00:06:59.129 21:50:18 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:59.129 21:50:18 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:59.390 [ 00:06:59.390
"bdev_malloc_delete", 00:06:59.390 "bdev_malloc_create", 00:06:59.390 "bdev_null_resize", 00:06:59.390 "bdev_null_delete", 00:06:59.390 "bdev_null_create", 00:06:59.390 "bdev_nvme_cuse_unregister", 00:06:59.390 "bdev_nvme_cuse_register", 00:06:59.390 "bdev_opal_new_user", 00:06:59.390 "bdev_opal_set_lock_state", 00:06:59.390 "bdev_opal_delete", 00:06:59.390 "bdev_opal_get_info", 00:06:59.390 "bdev_opal_create", 00:06:59.390 "bdev_nvme_opal_revert", 00:06:59.390 "bdev_nvme_opal_init", 00:06:59.390 "bdev_nvme_send_cmd", 00:06:59.390 "bdev_nvme_get_path_iostat", 00:06:59.390 "bdev_nvme_get_mdns_discovery_info", 00:06:59.390 "bdev_nvme_stop_mdns_discovery", 00:06:59.390 "bdev_nvme_start_mdns_discovery", 00:06:59.390 "bdev_nvme_set_multipath_policy", 00:06:59.390 "bdev_nvme_set_preferred_path", 00:06:59.390 "bdev_nvme_get_io_paths", 00:06:59.390 "bdev_nvme_remove_error_injection", 00:06:59.390 "bdev_nvme_add_error_injection", 00:06:59.390 "bdev_nvme_get_discovery_info", 00:06:59.390 "bdev_nvme_stop_discovery", 00:06:59.390 "bdev_nvme_start_discovery", 00:06:59.390 "bdev_nvme_get_controller_health_info", 00:06:59.390 "bdev_nvme_disable_controller", 00:06:59.390 "bdev_nvme_enable_controller", 00:06:59.390 "bdev_nvme_reset_controller", 00:06:59.390 "bdev_nvme_get_transport_statistics", 00:06:59.390 "bdev_nvme_apply_firmware", 00:06:59.391 "bdev_nvme_detach_controller", 00:06:59.391 "bdev_nvme_get_controllers", 00:06:59.391 "bdev_nvme_attach_controller", 00:06:59.391 "bdev_nvme_set_hotplug", 00:06:59.391 "bdev_nvme_set_options", 00:06:59.391 "bdev_passthru_delete", 00:06:59.391 "bdev_passthru_create", 00:06:59.391 "bdev_lvol_set_parent_bdev", 00:06:59.391 "bdev_lvol_set_parent", 00:06:59.391 "bdev_lvol_check_shallow_copy", 00:06:59.391 "bdev_lvol_start_shallow_copy", 00:06:59.391 "bdev_lvol_grow_lvstore", 00:06:59.391 "bdev_lvol_get_lvols", 00:06:59.391 "bdev_lvol_get_lvstores", 00:06:59.391 "bdev_lvol_delete", 00:06:59.391 "bdev_lvol_set_read_only", 00:06:59.391 
"bdev_lvol_resize", 00:06:59.391 "bdev_lvol_decouple_parent", 00:06:59.391 "bdev_lvol_inflate", 00:06:59.391 "bdev_lvol_rename", 00:06:59.391 "bdev_lvol_clone_bdev", 00:06:59.391 "bdev_lvol_clone", 00:06:59.391 "bdev_lvol_snapshot", 00:06:59.391 "bdev_lvol_create", 00:06:59.391 "bdev_lvol_delete_lvstore", 00:06:59.391 "bdev_lvol_rename_lvstore", 00:06:59.391 "bdev_lvol_create_lvstore", 00:06:59.391 "bdev_raid_set_options", 00:06:59.391 "bdev_raid_remove_base_bdev", 00:06:59.391 "bdev_raid_add_base_bdev", 00:06:59.391 "bdev_raid_delete", 00:06:59.391 "bdev_raid_create", 00:06:59.391 "bdev_raid_get_bdevs", 00:06:59.391 "bdev_error_inject_error", 00:06:59.391 "bdev_error_delete", 00:06:59.391 "bdev_error_create", 00:06:59.391 "bdev_split_delete", 00:06:59.391 "bdev_split_create", 00:06:59.391 "bdev_delay_delete", 00:06:59.391 "bdev_delay_create", 00:06:59.391 "bdev_delay_update_latency", 00:06:59.391 "bdev_zone_block_delete", 00:06:59.391 "bdev_zone_block_create", 00:06:59.391 "blobfs_create", 00:06:59.391 "blobfs_detect", 00:06:59.391 "blobfs_set_cache_size", 00:06:59.391 "bdev_crypto_delete", 00:06:59.391 "bdev_crypto_create", 00:06:59.391 "bdev_compress_delete", 00:06:59.391 "bdev_compress_create", 00:06:59.391 "bdev_compress_get_orphans", 00:06:59.391 "bdev_aio_delete", 00:06:59.391 "bdev_aio_rescan", 00:06:59.391 "bdev_aio_create", 00:06:59.391 "bdev_ftl_set_property", 00:06:59.391 "bdev_ftl_get_properties", 00:06:59.391 "bdev_ftl_get_stats", 00:06:59.391 "bdev_ftl_unmap", 00:06:59.391 "bdev_ftl_unload", 00:06:59.391 "bdev_ftl_delete", 00:06:59.391 "bdev_ftl_load", 00:06:59.391 "bdev_ftl_create", 00:06:59.391 "bdev_virtio_attach_controller", 00:06:59.391 "bdev_virtio_scsi_get_devices", 00:06:59.391 "bdev_virtio_detach_controller", 00:06:59.391 "bdev_virtio_blk_set_hotplug", 00:06:59.391 "bdev_iscsi_delete", 00:06:59.391 "bdev_iscsi_create", 00:06:59.391 "bdev_iscsi_set_options", 00:06:59.391 "accel_error_inject_error", 00:06:59.391 "ioat_scan_accel_module", 
00:06:59.391 "dsa_scan_accel_module", 00:06:59.391 "iaa_scan_accel_module", 00:06:59.391 "dpdk_cryptodev_get_driver", 00:06:59.391 "dpdk_cryptodev_set_driver", 00:06:59.391 "dpdk_cryptodev_scan_accel_module", 00:06:59.391 "compressdev_scan_accel_module", 00:06:59.391 "keyring_file_remove_key", 00:06:59.391 "keyring_file_add_key", 00:06:59.391 "keyring_linux_set_options", 00:06:59.391 "iscsi_get_histogram", 00:06:59.391 "iscsi_enable_histogram", 00:06:59.391 "iscsi_set_options", 00:06:59.391 "iscsi_get_auth_groups", 00:06:59.391 "iscsi_auth_group_remove_secret", 00:06:59.391 "iscsi_auth_group_add_secret", 00:06:59.391 "iscsi_delete_auth_group", 00:06:59.391 "iscsi_create_auth_group", 00:06:59.391 "iscsi_set_discovery_auth", 00:06:59.391 "iscsi_get_options", 00:06:59.391 "iscsi_target_node_request_logout", 00:06:59.391 "iscsi_target_node_set_redirect", 00:06:59.391 "iscsi_target_node_set_auth", 00:06:59.391 "iscsi_target_node_add_lun", 00:06:59.391 "iscsi_get_stats", 00:06:59.391 "iscsi_get_connections", 00:06:59.391 "iscsi_portal_group_set_auth", 00:06:59.391 "iscsi_start_portal_group", 00:06:59.391 "iscsi_delete_portal_group", 00:06:59.391 "iscsi_create_portal_group", 00:06:59.391 "iscsi_get_portal_groups", 00:06:59.391 "iscsi_delete_target_node", 00:06:59.391 "iscsi_target_node_remove_pg_ig_maps", 00:06:59.391 "iscsi_target_node_add_pg_ig_maps", 00:06:59.391 "iscsi_create_target_node", 00:06:59.391 "iscsi_get_target_nodes", 00:06:59.391 "iscsi_delete_initiator_group", 00:06:59.391 "iscsi_initiator_group_remove_initiators", 00:06:59.391 "iscsi_initiator_group_add_initiators", 00:06:59.391 "iscsi_create_initiator_group", 00:06:59.391 "iscsi_get_initiator_groups", 00:06:59.391 "nvmf_set_crdt", 00:06:59.391 "nvmf_set_config", 00:06:59.391 "nvmf_set_max_subsystems", 00:06:59.391 "nvmf_stop_mdns_prr", 00:06:59.391 "nvmf_publish_mdns_prr", 00:06:59.391 "nvmf_subsystem_get_listeners", 00:06:59.391 "nvmf_subsystem_get_qpairs", 00:06:59.391 "nvmf_subsystem_get_controllers", 
00:06:59.391 "nvmf_get_stats", 00:06:59.391 "nvmf_get_transports", 00:06:59.391 "nvmf_create_transport", 00:06:59.391 "nvmf_get_targets", 00:06:59.391 "nvmf_delete_target", 00:06:59.391 "nvmf_create_target", 00:06:59.391 "nvmf_subsystem_allow_any_host", 00:06:59.391 "nvmf_subsystem_remove_host", 00:06:59.391 "nvmf_subsystem_add_host", 00:06:59.391 "nvmf_ns_remove_host", 00:06:59.391 "nvmf_ns_add_host", 00:06:59.391 "nvmf_subsystem_remove_ns", 00:06:59.391 "nvmf_subsystem_add_ns", 00:06:59.391 "nvmf_subsystem_listener_set_ana_state", 00:06:59.391 "nvmf_discovery_get_referrals", 00:06:59.391 "nvmf_discovery_remove_referral", 00:06:59.391 "nvmf_discovery_add_referral", 00:06:59.391 "nvmf_subsystem_remove_listener", 00:06:59.391 "nvmf_subsystem_add_listener", 00:06:59.391 "nvmf_delete_subsystem", 00:06:59.391 "nvmf_create_subsystem", 00:06:59.391 "nvmf_get_subsystems", 00:06:59.391 "env_dpdk_get_mem_stats", 00:06:59.391 "nbd_get_disks", 00:06:59.391 "nbd_stop_disk", 00:06:59.391 "nbd_start_disk", 00:06:59.391 "ublk_recover_disk", 00:06:59.391 "ublk_get_disks", 00:06:59.391 "ublk_stop_disk", 00:06:59.391 "ublk_start_disk", 00:06:59.391 "ublk_destroy_target", 00:06:59.391 "ublk_create_target", 00:06:59.391 "virtio_blk_create_transport", 00:06:59.391 "virtio_blk_get_transports", 00:06:59.391 "vhost_controller_set_coalescing", 00:06:59.391 "vhost_get_controllers", 00:06:59.391 "vhost_delete_controller", 00:06:59.391 "vhost_create_blk_controller", 00:06:59.391 "vhost_scsi_controller_remove_target", 00:06:59.391 "vhost_scsi_controller_add_target", 00:06:59.391 "vhost_start_scsi_controller", 00:06:59.391 "vhost_create_scsi_controller", 00:06:59.391 "thread_set_cpumask", 00:06:59.391 "framework_get_governor", 00:06:59.391 "framework_get_scheduler", 00:06:59.391 "framework_set_scheduler", 00:06:59.391 "framework_get_reactors", 00:06:59.391 "thread_get_io_channels", 00:06:59.391 "thread_get_pollers", 00:06:59.391 "thread_get_stats", 00:06:59.391 
"framework_monitor_context_switch", 00:06:59.391 "spdk_kill_instance", 00:06:59.391 "log_enable_timestamps", 00:06:59.391 "log_get_flags", 00:06:59.391 "log_clear_flag", 00:06:59.391 "log_set_flag", 00:06:59.391 "log_get_level", 00:06:59.391 "log_set_level", 00:06:59.391 "log_get_print_level", 00:06:59.391 "log_set_print_level", 00:06:59.391 "framework_enable_cpumask_locks", 00:06:59.391 "framework_disable_cpumask_locks", 00:06:59.391 "framework_wait_init", 00:06:59.391 "framework_start_init", 00:06:59.391 "scsi_get_devices", 00:06:59.391 "bdev_get_histogram", 00:06:59.391 "bdev_enable_histogram", 00:06:59.391 "bdev_set_qos_limit", 00:06:59.391 "bdev_set_qd_sampling_period", 00:06:59.391 "bdev_get_bdevs", 00:06:59.391 "bdev_reset_iostat", 00:06:59.391 "bdev_get_iostat", 00:06:59.391 "bdev_examine", 00:06:59.391 "bdev_wait_for_examine", 00:06:59.391 "bdev_set_options", 00:06:59.391 "notify_get_notifications", 00:06:59.391 "notify_get_types", 00:06:59.391 "accel_get_stats", 00:06:59.391 "accel_set_options", 00:06:59.391 "accel_set_driver", 00:06:59.391 "accel_crypto_key_destroy", 00:06:59.391 "accel_crypto_keys_get", 00:06:59.391 "accel_crypto_key_create", 00:06:59.391 "accel_assign_opc", 00:06:59.391 "accel_get_module_info", 00:06:59.391 "accel_get_opc_assignments", 00:06:59.391 "vmd_rescan", 00:06:59.391 "vmd_remove_device", 00:06:59.391 "vmd_enable", 00:06:59.391 "sock_get_default_impl", 00:06:59.391 "sock_set_default_impl", 00:06:59.391 "sock_impl_set_options", 00:06:59.391 "sock_impl_get_options", 00:06:59.391 "iobuf_get_stats", 00:06:59.391 "iobuf_set_options", 00:06:59.391 "framework_get_pci_devices", 00:06:59.391 "framework_get_config", 00:06:59.391 "framework_get_subsystems", 00:06:59.391 "trace_get_info", 00:06:59.391 "trace_get_tpoint_group_mask", 00:06:59.391 "trace_disable_tpoint_group", 00:06:59.391 "trace_enable_tpoint_group", 00:06:59.391 "trace_clear_tpoint_mask", 00:06:59.391 "trace_set_tpoint_mask", 00:06:59.391 "keyring_get_keys", 00:06:59.391 
"spdk_get_version", 00:06:59.391 "rpc_get_methods" 00:06:59.391 ] 00:06:59.391 21:50:18 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:59.391 21:50:18 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:59.391 21:50:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:59.391 21:50:18 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:59.391 21:50:18 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1289653 00:06:59.391 21:50:18 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1289653 ']' 00:06:59.391 21:50:18 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1289653 00:06:59.391 21:50:18 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:59.391 21:50:18 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:59.391 21:50:18 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1289653 00:06:59.391 21:50:18 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:59.391 21:50:18 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:59.391 21:50:18 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1289653' 00:06:59.391 killing process with pid 1289653 00:06:59.391 21:50:18 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1289653 00:06:59.391 21:50:18 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1289653 00:07:01.927 00:07:01.927 real 0m4.142s 00:07:01.927 user 0m7.176s 00:07:01.927 sys 0m0.677s 00:07:01.927 21:50:21 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.927 21:50:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:01.927 ************************************ 00:07:01.927 END TEST spdkcli_tcp 00:07:01.927 ************************************ 00:07:01.927 21:50:21 -- common/autotest_common.sh@1142 -- # return 0 00:07:01.927 21:50:21 -- spdk/autotest.sh@180 -- # run_test 
dpdk_mem_utility /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:01.927 21:50:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:01.927 21:50:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.927 21:50:21 -- common/autotest_common.sh@10 -- # set +x 00:07:01.927 ************************************ 00:07:01.927 START TEST dpdk_mem_utility 00:07:01.927 ************************************ 00:07:01.927 21:50:21 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:01.927 * Looking for test storage... 00:07:01.927 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility 00:07:01.927 21:50:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:01.927 21:50:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:01.927 21:50:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1290327 00:07:01.927 21:50:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1290327 00:07:01.927 21:50:21 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1290327 ']' 00:07:01.927 21:50:21 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.927 21:50:21 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.927 21:50:21 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:01.927 21:50:21 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.927 21:50:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:01.928 [2024-07-13 21:50:21.292304] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:01.928 [2024-07-13 21:50:21.292405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1290327 ] 00:07:02.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.187 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:02.187 [same two messages repeated for devices 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7] 00:07:02.188 [2024-07-13 21:50:21.453997] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.447 [2024-07-13 21:50:21.657863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.389 21:50:22 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.389 21:50:22 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:07:03.389 21:50:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:03.389 21:50:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:03.389 21:50:22 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.389 21:50:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:03.389 { 00:07:03.389 "filename":
"/tmp/spdk_mem_dump.txt" 00:07:03.389 } 00:07:03.389 21:50:22 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.389 21:50:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:03.389 DPDK memory size 820.000000 MiB in 1 heap(s) 00:07:03.389 1 heaps totaling size 820.000000 MiB 00:07:03.389 size: 820.000000 MiB heap id: 0 00:07:03.389 end heaps---------- 00:07:03.389 8 mempools totaling size 598.116089 MiB 00:07:03.389 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:03.389 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:03.389 size: 84.521057 MiB name: bdev_io_1290327 00:07:03.389 size: 51.011292 MiB name: evtpool_1290327 00:07:03.389 size: 50.003479 MiB name: msgpool_1290327 00:07:03.389 size: 21.763794 MiB name: PDU_Pool 00:07:03.389 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:03.389 size: 0.026123 MiB name: Session_Pool 00:07:03.389 end mempools------- 00:07:03.389 201 memzones totaling size 4.176453 MiB 00:07:03.389 size: 1.000366 MiB name: RG_ring_0_1290327 00:07:03.389 size: 1.000366 MiB name: RG_ring_1_1290327 00:07:03.389 size: 1.000366 MiB name: RG_ring_4_1290327 00:07:03.389 size: 1.000366 MiB name: RG_ring_5_1290327 00:07:03.389 size: 0.125366 MiB name: RG_ring_2_1290327 00:07:03.389 size: 0.015991 MiB name: RG_ring_3_1290327 00:07:03.389 size: 0.001160 MiB name: QAT_SYM_CAPA_GEN_1 00:07:03.389 size: 0.000305 MiB name: 0000:1a:01.0_qat 00:07:03.389 size: 0.000305 MiB name: 0000:1a:01.1_qat 00:07:03.389 size: 0.000305 MiB name: 0000:1a:01.2_qat 00:07:03.389 size: 0.000305 MiB name: 0000:1a:01.3_qat 00:07:03.389 size: 0.000305 MiB name: 0000:1a:01.4_qat 00:07:03.389 size: 0.000305 MiB name: 0000:1a:01.5_qat 00:07:03.389 size: 0.000305 MiB name: 0000:1a:01.6_qat 00:07:03.389 size: 0.000305 MiB name: 0000:1a:01.7_qat 00:07:03.389 size: 0.000305 MiB name: 0000:1a:02.0_qat 00:07:03.389 size: 0.000305 MiB name: 
0000:1a:02.1_qat 00:07:03.389 size: 0.000305 MiB name: 0000:1a:02.2_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1a:02.3_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1a:02.4_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1a:02.5_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1a:02.6_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1a:02.7_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1c:01.0_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1c:01.1_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1c:01.2_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1c:01.3_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1c:01.4_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1c:01.5_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1c:01.6_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1c:01.7_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1c:02.0_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1c:02.1_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1c:02.2_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1c:02.3_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1c:02.4_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1c:02.5_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1c:02.6_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1c:02.7_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1e:01.0_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1e:01.1_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1e:01.2_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1e:01.3_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1e:01.4_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1e:01.5_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1e:01.6_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1e:01.7_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1e:02.0_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1e:02.1_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1e:02.2_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1e:02.3_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1e:02.4_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1e:02.5_qat 
00:07:03.390 size: 0.000305 MiB name: 0000:1e:02.6_qat 00:07:03.390 size: 0.000305 MiB name: 0000:1e:02.7_qat 00:07:03.390 size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_0 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_1 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_0 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_2 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_3 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_1 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_4 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_5 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_2 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_6 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_7 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_3 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_8 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_9 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_4 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_10 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_11 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_5 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_12 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_13 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_6 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_14 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_15 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_7 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_16 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_17 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_8 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_18 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_19 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_9 00:07:03.390 size: 0.000122 MiB 
name: rte_cryptodev_data_20 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_21 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_10 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_22 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_23 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_11 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_24 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_25 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_12 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_26 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_27 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_13 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_28 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_29 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_14 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_30 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_31 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_15 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_32 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_33 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_16 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_34 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_35 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_17 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_36 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_37 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_18 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_38 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_39 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_19 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_40 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_41 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_20 00:07:03.390 size: 0.000122 
MiB name: rte_cryptodev_data_42 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_43 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_21 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_44 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_45 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_22 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_46 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_47 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_23 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_48 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_49 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_24 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_50 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_51 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_25 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_52 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_53 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_26 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_54 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_55 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_27 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_56 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_57 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_28 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_58 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_59 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_29 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_60 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_61 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_30 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_62 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_63 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_31 00:07:03.390 size: 
0.000122 MiB name: rte_cryptodev_data_64 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_65 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_32 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_66 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_67 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_33 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_68 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_69 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_34 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_70 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_71 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_35 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_72 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_73 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_36 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_74 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_75 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_37 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_76 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_77 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_38 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_78 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_79 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_39 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_80 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_81 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_40 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_82 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_83 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_41 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_84 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_85 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_42 00:07:03.390 
size: 0.000122 MiB name: rte_cryptodev_data_86 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_87 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_43 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_88 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_89 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_44 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_90 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_91 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_45 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_92 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_93 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_46 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_94 00:07:03.390 size: 0.000122 MiB name: rte_cryptodev_data_95 00:07:03.390 size: 0.000122 MiB name: rte_compressdev_data_47 00:07:03.390 size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1 00:07:03.390 end memzones------- 00:07:03.390 21:50:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:03.390 heap id: 0 total size: 820.000000 MiB number of busy elements: 630 number of free elements: 17 00:07:03.390 list of free elements. 
size: 17.730347 MiB 00:07:03.391 element at address: 0x200000400000 with size: 1.999451 MiB 00:07:03.391 element at address: 0x200000800000 with size: 1.996887 MiB 00:07:03.391 element at address: 0x200007000000 with size: 1.995972 MiB 00:07:03.391 element at address: 0x20000b200000 with size: 1.995972 MiB 00:07:03.391 element at address: 0x200019100040 with size: 0.999939 MiB 00:07:03.391 element at address: 0x200019500040 with size: 0.999939 MiB 00:07:03.391 element at address: 0x200019900040 with size: 0.999939 MiB 00:07:03.391 element at address: 0x200019600000 with size: 0.999329 MiB 00:07:03.391 element at address: 0x200003e00000 with size: 0.996338 MiB 00:07:03.391 element at address: 0x200032200000 with size: 0.994324 MiB 00:07:03.391 element at address: 0x200018e00000 with size: 0.959656 MiB 00:07:03.391 element at address: 0x20001b000000 with size: 0.580994 MiB 00:07:03.391 element at address: 0x200019200000 with size: 0.491150 MiB 00:07:03.391 element at address: 0x200019a00000 with size: 0.485657 MiB 00:07:03.391 element at address: 0x200013800000 with size: 0.467651 MiB 00:07:03.391 element at address: 0x200028400000 with size: 0.394348 MiB 00:07:03.391 element at address: 0x200003a00000 with size: 0.372803 MiB 00:07:03.391 list of standard malloc elements. 
size: 199.935913 MiB 00:07:03.391 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:07:03.391 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:07:03.391 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:07:03.391 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:07:03.391 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:07:03.391 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:07:03.391 element at address: 0x200000207480 with size: 0.062683 MiB 00:07:03.391 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:07:03.391 element at address: 0x2000003239c0 with size: 0.004456 MiB 00:07:03.391 element at address: 0x200000327740 with size: 0.004456 MiB 00:07:03.391 element at address: 0x20000032b4c0 with size: 0.004456 MiB 00:07:03.391 element at address: 0x20000032f240 with size: 0.004456 MiB 00:07:03.391 element at address: 0x200000332fc0 with size: 0.004456 MiB 00:07:03.391 element at address: 0x200000336d40 with size: 0.004456 MiB 00:07:03.391 element at address: 0x20000033aac0 with size: 0.004456 MiB 00:07:03.391 element at address: 0x20000033e840 with size: 0.004456 MiB 00:07:03.391 element at address: 0x2000003425c0 with size: 0.004456 MiB 00:07:03.391 element at address: 0x200000346340 with size: 0.004456 MiB 00:07:03.391 element at address: 0x20000034a0c0 with size: 0.004456 MiB 00:07:03.391 element at address: 0x20000034de40 with size: 0.004456 MiB 00:07:03.391 element at address: 0x200000351bc0 with size: 0.004456 MiB 00:07:03.391 element at address: 0x200000355940 with size: 0.004456 MiB 00:07:03.391 element at address: 0x2000003596c0 with size: 0.004456 MiB 00:07:03.391 element at address: 0x20000035d440 with size: 0.004456 MiB 00:07:03.391 element at address: 0x2000003611c0 with size: 0.004456 MiB 00:07:03.391 element at address: 0x200000364f40 with size: 0.004456 MiB 00:07:03.391 element at address: 0x200000368cc0 with size: 0.004456 MiB 00:07:03.391 element at 
address: 0x20000036ca40 with size: 0.004456 MiB 00:07:03.391 element at address: 0x2000003707c0 with size: 0.004456 MiB 00:07:03.391 element at address: 0x200000374540 with size: 0.004456 MiB 00:07:03.391 element at address: 0x2000003782c0 with size: 0.004456 MiB 00:07:03.391 element at address: 0x20000037c040 with size: 0.004456 MiB 00:07:03.391 element at address: 0x20000037fdc0 with size: 0.004456 MiB 00:07:03.391 element at address: 0x200000383b40 with size: 0.004456 MiB 00:07:03.391 element at address: 0x2000003878c0 with size: 0.004456 MiB 00:07:03.391 element at address: 0x20000038b640 with size: 0.004456 MiB 00:07:03.391 element at address: 0x20000038f3c0 with size: 0.004456 MiB 00:07:03.391 element at address: 0x200000393140 with size: 0.004456 MiB 00:07:03.391 element at address: 0x200000396ec0 with size: 0.004456 MiB 00:07:03.391 element at address: 0x20000039ac40 with size: 0.004456 MiB 00:07:03.391 element at address: 0x20000039e9c0 with size: 0.004456 MiB 00:07:03.391 element at address: 0x2000003a2740 with size: 0.004456 MiB 00:07:03.391 element at address: 0x2000003a64c0 with size: 0.004456 MiB 00:07:03.391 element at address: 0x2000003aa240 with size: 0.004456 MiB 00:07:03.391 element at address: 0x2000003adfc0 with size: 0.004456 MiB 00:07:03.391 element at address: 0x2000003b1d40 with size: 0.004456 MiB 00:07:03.391 element at address: 0x2000003b5ac0 with size: 0.004456 MiB 00:07:03.391 element at address: 0x2000003b9840 with size: 0.004456 MiB 00:07:03.391 element at address: 0x2000003bd5c0 with size: 0.004456 MiB 00:07:03.391 element at address: 0x2000003c1340 with size: 0.004456 MiB 00:07:03.391 element at address: 0x2000003c50c0 with size: 0.004456 MiB 00:07:03.391 element at address: 0x2000003c8e40 with size: 0.004456 MiB 00:07:03.391 element at address: 0x2000003ccbc0 with size: 0.004456 MiB 00:07:03.391 element at address: 0x2000003d0940 with size: 0.004456 MiB 00:07:03.391 element at address: 0x2000003d46c0 with size: 0.004456 MiB 
00:07:03.391 element at address: 0x2000003d8c40 with size: 0.004456 MiB 00:07:03.391 element at address: 0x200000321840 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000322900 with size: 0.004089 MiB 00:07:03.391 element at address: 0x2000003255c0 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000326680 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000329340 with size: 0.004089 MiB 00:07:03.391 element at address: 0x20000032a400 with size: 0.004089 MiB 00:07:03.391 element at address: 0x20000032d0c0 with size: 0.004089 MiB 00:07:03.391 element at address: 0x20000032e180 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000330e40 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000331f00 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000334bc0 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000335c80 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000338940 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000339a00 with size: 0.004089 MiB 00:07:03.391 element at address: 0x20000033c6c0 with size: 0.004089 MiB 00:07:03.391 element at address: 0x20000033d780 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000340440 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000341500 with size: 0.004089 MiB 00:07:03.391 element at address: 0x2000003441c0 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000345280 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000347f40 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000349000 with size: 0.004089 MiB 00:07:03.391 element at address: 0x20000034bcc0 with size: 0.004089 MiB 00:07:03.391 element at address: 0x20000034cd80 with size: 0.004089 MiB 00:07:03.391 element at address: 0x20000034fa40 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000350b00 with size: 0.004089 MiB 00:07:03.391 element at address: 0x2000003537c0 with 
size: 0.004089 MiB 00:07:03.391 element at address: 0x200000354880 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000357540 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000358600 with size: 0.004089 MiB 00:07:03.391 element at address: 0x20000035b2c0 with size: 0.004089 MiB 00:07:03.391 element at address: 0x20000035c380 with size: 0.004089 MiB 00:07:03.391 element at address: 0x20000035f040 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000360100 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000362dc0 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000363e80 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000366b40 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000367c00 with size: 0.004089 MiB 00:07:03.391 element at address: 0x20000036a8c0 with size: 0.004089 MiB 00:07:03.391 element at address: 0x20000036b980 with size: 0.004089 MiB 00:07:03.391 element at address: 0x20000036e640 with size: 0.004089 MiB 00:07:03.391 element at address: 0x20000036f700 with size: 0.004089 MiB 00:07:03.391 element at address: 0x2000003723c0 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000373480 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000376140 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000377200 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000379ec0 with size: 0.004089 MiB 00:07:03.391 element at address: 0x20000037af80 with size: 0.004089 MiB 00:07:03.391 element at address: 0x20000037dc40 with size: 0.004089 MiB 00:07:03.391 element at address: 0x20000037ed00 with size: 0.004089 MiB 00:07:03.391 element at address: 0x2000003819c0 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000382a80 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000385740 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000386800 with size: 0.004089 MiB 00:07:03.391 element at address: 
0x2000003894c0 with size: 0.004089 MiB 00:07:03.391 element at address: 0x20000038a580 with size: 0.004089 MiB 00:07:03.391 element at address: 0x20000038d240 with size: 0.004089 MiB 00:07:03.391 element at address: 0x20000038e300 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000390fc0 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000392080 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000394d40 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000395e00 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000398ac0 with size: 0.004089 MiB 00:07:03.391 element at address: 0x200000399b80 with size: 0.004089 MiB 00:07:03.391 element at address: 0x20000039c840 with size: 0.004089 MiB 00:07:03.391 element at address: 0x20000039d900 with size: 0.004089 MiB 00:07:03.391 element at address: 0x2000003a05c0 with size: 0.004089 MiB 00:07:03.391 element at address: 0x2000003a1680 with size: 0.004089 MiB 00:07:03.391 element at address: 0x2000003a4340 with size: 0.004089 MiB 00:07:03.391 element at address: 0x2000003a5400 with size: 0.004089 MiB 00:07:03.391 element at address: 0x2000003a80c0 with size: 0.004089 MiB 00:07:03.391 element at address: 0x2000003a9180 with size: 0.004089 MiB 00:07:03.391 element at address: 0x2000003abe40 with size: 0.004089 MiB 00:07:03.391 element at address: 0x2000003acf00 with size: 0.004089 MiB 00:07:03.391 element at address: 0x2000003afbc0 with size: 0.004089 MiB 00:07:03.391 element at address: 0x2000003b0c80 with size: 0.004089 MiB 00:07:03.391 element at address: 0x2000003b3940 with size: 0.004089 MiB 00:07:03.391 element at address: 0x2000003b4a00 with size: 0.004089 MiB 00:07:03.391 element at address: 0x2000003b76c0 with size: 0.004089 MiB 00:07:03.391 element at address: 0x2000003b8780 with size: 0.004089 MiB 00:07:03.391 element at address: 0x2000003bb440 with size: 0.004089 MiB 00:07:03.391 element at address: 0x2000003bc500 with size: 0.004089 MiB 00:07:03.391 
element at address: 0x2000003bf1c0 with size: 0.004089 MiB 00:07:03.391 element at address: 0x2000003c0280 with size: 0.004089 MiB 00:07:03.391 element at address: 0x2000003c2f40 with size: 0.004089 MiB 00:07:03.391 element at address: 0x2000003c4000 with size: 0.004089 MiB 00:07:03.391 element at address: 0x2000003c6cc0 with size: 0.004089 MiB 00:07:03.391 element at address: 0x2000003c7d80 with size: 0.004089 MiB 00:07:03.391 element at address: 0x2000003caa40 with size: 0.004089 MiB 00:07:03.392 element at address: 0x2000003cbb00 with size: 0.004089 MiB 00:07:03.392 element at address: 0x2000003ce7c0 with size: 0.004089 MiB 00:07:03.392 element at address: 0x2000003cf880 with size: 0.004089 MiB 00:07:03.392 element at address: 0x2000003d2540 with size: 0.004089 MiB 00:07:03.392 element at address: 0x2000003d3600 with size: 0.004089 MiB 00:07:03.392 element at address: 0x2000003d6ac0 with size: 0.004089 MiB 00:07:03.392 element at address: 0x2000003d7b80 with size: 0.004089 MiB 00:07:03.392 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:07:03.392 element at address: 0x200000207300 with size: 0.000366 MiB 00:07:03.392 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:07:03.392 element at address: 0x200000200000 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000200100 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000200200 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000200300 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000200400 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000200500 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000200600 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000200700 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000200800 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000200900 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000200a00 with size: 0.000244 
MiB 00:07:03.392 element at address: 0x200000200b00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000200c00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000200d00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000200e00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000200f00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000201000 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000201100 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000201200 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000201300 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000201400 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000201500 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000201600 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000201700 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000201800 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000201900 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000201a00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000201b00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000201c00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000201d00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000201e00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000201f00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000202000 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000202100 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000202200 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000202300 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000202400 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000202500 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000202600 
with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000202700 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000202800 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000202900 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000202a00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000202b00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000202c00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000202d00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000202e00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000202f00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000203000 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000203100 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000203200 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000203300 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000203400 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000203500 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000203600 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000203700 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000203800 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000203900 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000203a00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000203b00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000203c00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000203d00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000203e00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000203f00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000204000 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000204100 with size: 0.000244 MiB 00:07:03.392 element at 
address: 0x200000204200 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000204300 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000204400 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000204500 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000204600 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000204700 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000204800 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000204900 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000204a00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000204b00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000204c00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000204d00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000204e00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000204f00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000205000 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000205100 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000205200 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000205300 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000205400 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000205500 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000205600 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000205700 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000205800 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000205900 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000205a00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000205b00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000205c00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000205d00 with size: 0.000244 MiB 
00:07:03.392 element at address: 0x200000205e00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000205f00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000206000 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000206100 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000206200 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000206300 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000206400 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000206500 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000206600 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000206700 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000206800 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000206900 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000206a00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000206b00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000206c00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000206d00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000206e00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000206f00 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000207000 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000207100 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000207200 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000217540 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000217640 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000217740 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000217840 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000217940 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000217a40 with size: 0.000244 MiB 00:07:03.392 element at address: 0x200000217b40 with 
size: 0.000244 MiB
00:07:03.392 [repetitive heap element listing: several hundred elements, each with size: 0.000244 MiB, at addresses 0x200000217c40 through 0x20002846fe80; timestamps 00:07:03.392-00:07:03.395]
00:07:03.395 list of memzone associated elements.
size: 602.333740 MiB
00:07:03.395 element at address: 0x20001b0954c0 with size: 211.416809 MiB; associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:07:03.395 element at address: 0x20002846ff80 with size: 157.562622 MiB; associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:07:03.395 element at address: 0x2000139fab40 with size: 84.020691 MiB; associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1290327_0
00:07:03.395 element at address: 0x2000009ff340 with size: 48.003113 MiB; associated memzone info: size: 48.002930 MiB name: MP_evtpool_1290327_0
00:07:03.395 element at address: 0x200003fff340 with size: 48.003113 MiB; associated memzone info: size: 48.002930 MiB name: MP_msgpool_1290327_0
00:07:03.395 element at address: 0x200019bbe900 with size: 20.255615 MiB; associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:07:03.395 element at address: 0x2000323feb00 with size: 18.005127 MiB; associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:07:03.395 element at address: 0x2000005ffdc0 with size: 2.000549 MiB; associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1290327
00:07:03.395 element at address: 0x200003bffdc0 with size: 2.000549 MiB; associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1290327
00:07:03.395 element at address: 0x20000021ec00 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_evtpool_1290327
00:07:03.395 element at address: 0x2000192fde00 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:07:03.395 element at address: 0x200019abc780 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:07:03.395 element at address: 0x200018efde00 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:07:03.395 element at address: 0x2000138f89c0 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:07:03.395 element at address: 0x200003eff100 with size: 1.000549 MiB; associated memzone info: size: 1.000366 MiB name: RG_ring_0_1290327
00:07:03.395 element at address: 0x200003affb80 with size: 1.000549 MiB; associated memzone info: size: 1.000366 MiB name: RG_ring_1_1290327
00:07:03.395 element at address: 0x2000196ffd40 with size: 1.000549 MiB; associated memzone info: size: 1.000366 MiB name: RG_ring_4_1290327
00:07:03.395 element at address: 0x2000322fe8c0 with size: 1.000549 MiB; associated memzone info: size: 1.000366 MiB name: RG_ring_5_1290327
00:07:03.395 element at address: 0x200003a5f700 with size: 0.500549 MiB; associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1290327
00:07:03.395 element at address: 0x20001927dbc0 with size: 0.500549 MiB; associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:07:03.395 element at address: 0x200013878680 with size: 0.500549 MiB; associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:07:03.395 element at address: 0x200019a7c540 with size: 0.250549 MiB; associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:07:03.395 element at address: 0x200003adf940 with size: 0.125549 MiB; associated memzone info: size: 0.125366 MiB name: RG_ring_2_1290327
00:07:03.395 element at address: 0x200018ef5ac0 with size: 0.031799 MiB; associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:07:03.395 element at address: 0x200028465140 with size: 0.023804 MiB; associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:07:03.395 element at address: 0x200000218540 with size: 0.016174 MiB; associated memzone info: size: 0.015991 MiB name: RG_ring_3_1290327
00:07:03.395 element at address: 0x20002846b2c0 with size: 0.002502 MiB; associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:07:03.395 element at address: 0x2000003d5c40 with size: 0.001343 MiB; associated memzone info: size: 0.001160 MiB name: QAT_SYM_CAPA_GEN_1
00:07:03.395 element at address: 0x2000003d68c0 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1a:01.0_qat
00:07:03.395 element at address: 0x2000003d2340 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1a:01.1_qat
00:07:03.395 element at address: 0x2000003ce5c0 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1a:01.2_qat
00:07:03.395 element at address: 0x2000003ca840 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1a:01.3_qat
00:07:03.395 element at address: 0x2000003c6ac0 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1a:01.4_qat
00:07:03.395 element at address: 0x2000003c2d40 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1a:01.5_qat
00:07:03.395 element at address: 0x2000003befc0 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1a:01.6_qat
00:07:03.395 element at address: 0x2000003bb240 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1a:01.7_qat
00:07:03.395 element at address: 0x2000003b74c0 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1a:02.0_qat
00:07:03.395 element at address: 0x2000003b3740 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1a:02.1_qat
00:07:03.395 element at address: 0x2000003af9c0 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1a:02.2_qat
00:07:03.395 element at address: 0x2000003abc40 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1a:02.3_qat
00:07:03.395 element at address: 0x2000003a7ec0 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1a:02.4_qat
00:07:03.395 element at address: 0x2000003a4140 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1a:02.5_qat
00:07:03.395 element at address: 0x2000003a03c0 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1a:02.6_qat
00:07:03.395 element at address: 0x20000039c640 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1a:02.7_qat
00:07:03.395 element at address: 0x2000003988c0 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1c:01.0_qat
00:07:03.395 element at address: 0x200000394b40 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1c:01.1_qat
00:07:03.395 element at address: 0x200000390dc0 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1c:01.2_qat
00:07:03.395 element at address: 0x20000038d040 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1c:01.3_qat
00:07:03.395 element at address: 0x2000003892c0 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1c:01.4_qat
00:07:03.395 element at address: 0x200000385540 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1c:01.5_qat
00:07:03.395 element at address: 0x2000003817c0 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1c:01.6_qat
00:07:03.395 element at address: 0x20000037da40 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1c:01.7_qat
00:07:03.395 element at address: 0x200000379cc0 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1c:02.0_qat
00:07:03.395 element at address: 0x200000375f40 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1c:02.1_qat
00:07:03.395 element at address: 0x2000003721c0 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1c:02.2_qat
00:07:03.395 element at address: 0x20000036e440 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1c:02.3_qat
00:07:03.395 element at address: 0x20000036a6c0 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1c:02.4_qat
00:07:03.395 element at address: 0x200000366940 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1c:02.5_qat
00:07:03.395 element at address: 0x200000362bc0 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1c:02.6_qat
00:07:03.395 element at address: 0x20000035ee40 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1c:02.7_qat
00:07:03.395 element at address: 0x20000035b0c0 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1e:01.0_qat
00:07:03.396 element at address: 0x200000357340 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1e:01.1_qat
00:07:03.396 element at address: 0x2000003535c0 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1e:01.2_qat
00:07:03.396 element at address: 0x20000034f840 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1e:01.3_qat
00:07:03.396 element at address: 0x20000034bac0 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1e:01.4_qat
00:07:03.396 element at address: 0x200000347d40 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1e:01.5_qat
00:07:03.396 element at address: 0x200000343fc0 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1e:01.6_qat
00:07:03.396 element at address: 0x200000340240 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1e:01.7_qat
00:07:03.396 element at address: 0x20000033c4c0 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1e:02.0_qat
00:07:03.396 element at address: 0x200000338740 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1e:02.1_qat
00:07:03.396 element at address: 0x2000003349c0 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1e:02.2_qat
00:07:03.396 element at address: 0x200000330c40 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1e:02.3_qat
00:07:03.396 element at address: 0x20000032cec0 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1e:02.4_qat
00:07:03.396 element at address: 0x200000329140 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1e:02.5_qat
00:07:03.396 element at address: 0x2000003253c0 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1e:02.6_qat
00:07:03.396 element at address: 0x200000321640 with size: 0.000488 MiB; associated memzone info: size: 0.000305 MiB name: 0000:1e:02.7_qat
00:07:03.396 element at address: 0x2000003d6500 with size: 0.000366 MiB; associated memzone info: size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1
00:07:03.396 element at address: 0x20000021d980 with size: 0.000366 MiB; associated memzone info: size: 0.000183 MiB name: MP_msgpool_1290327
00:07:03.396 element at address: 0x2000137ffd80 with size: 0.000366 MiB; associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1290327
00:07:03.396 element at address: 0x20002846be00 with size: 0.000366 MiB; associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:07:03.396 element at address: 0x2000003d6780 with size: 0.000305 MiB; associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_0
00:07:03.396 element at address: 0x2000003d63c0 with size: 0.000305 MiB; associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_1
00:07:03.396 element at address: 0x2000003d5b00 with size: 0.000305 MiB; associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_0
00:07:03.396 element at address: 0x2000003d2200 with size: 0.000305 MiB; associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_2
00:07:03.396 element at address: 0x2000003d1fc0 with size: 0.000305 MiB; associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_3
00:07:03.396 element at address: 0x2000003d1c80 with size: 0.000305 MiB; associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_1
00:07:03.396 element at address: 0x2000003ce480 with size: 0.000305 MiB; associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_4
00:07:03.396 element at address: 0x2000003ce240 with size: 0.000305 MiB; associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_5
00:07:03.396 element at address: 0x2000003cdf00 with size: 0.000305 MiB; associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_2
00:07:03.396 element at address: 0x2000003ca700 with size: 0.000305 MiB; associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_6
00:07:03.396 element at address: 0x2000003ca4c0 with size: 0.000305 MiB; associated memzone info:
size: 0.000122 MiB name: rte_cryptodev_data_7 00:07:03.396 element at address: 0x2000003ca180 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_3 00:07:03.396 element at address: 0x2000003c6980 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_8 00:07:03.396 element at address: 0x2000003c6740 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_9 00:07:03.396 element at address: 0x2000003c6400 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_4 00:07:03.396 element at address: 0x2000003c2c00 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_10 00:07:03.396 element at address: 0x2000003c29c0 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_11 00:07:03.396 element at address: 0x2000003c2680 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_5 00:07:03.396 element at address: 0x2000003bee80 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_12 00:07:03.396 element at address: 0x2000003bec40 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_13 00:07:03.396 element at address: 0x2000003be900 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_6 00:07:03.396 element at address: 0x2000003bb100 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_14 00:07:03.396 element at address: 0x2000003baec0 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_15 00:07:03.396 element at address: 0x2000003bab80 with size: 0.000305 
MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_7 00:07:03.396 element at address: 0x2000003b7380 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_16 00:07:03.396 element at address: 0x2000003b7140 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_17 00:07:03.396 element at address: 0x2000003b6e00 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_8 00:07:03.396 element at address: 0x2000003b3600 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_18 00:07:03.396 element at address: 0x2000003b33c0 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_19 00:07:03.396 element at address: 0x2000003b3080 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_9 00:07:03.396 element at address: 0x2000003af880 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_20 00:07:03.396 element at address: 0x2000003af640 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_21 00:07:03.396 element at address: 0x2000003af300 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_10 00:07:03.396 element at address: 0x2000003abb00 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_22 00:07:03.396 element at address: 0x2000003ab8c0 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_23 00:07:03.396 element at address: 0x2000003ab580 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_11 00:07:03.396 
element at address: 0x2000003a7d80 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_24 00:07:03.396 element at address: 0x2000003a7b40 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_25 00:07:03.396 element at address: 0x2000003a7800 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_12 00:07:03.396 element at address: 0x2000003a4000 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_26 00:07:03.396 element at address: 0x2000003a3dc0 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_27 00:07:03.396 element at address: 0x2000003a3a80 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_13 00:07:03.396 element at address: 0x2000003a0280 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_28 00:07:03.396 element at address: 0x2000003a0040 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_29 00:07:03.396 element at address: 0x20000039fd00 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_14 00:07:03.396 element at address: 0x20000039c500 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_30 00:07:03.396 element at address: 0x20000039c2c0 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_31 00:07:03.396 element at address: 0x20000039bf80 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_15 00:07:03.396 element at address: 0x200000398780 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 
0.000122 MiB name: rte_cryptodev_data_32 00:07:03.396 element at address: 0x200000398540 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_33 00:07:03.396 element at address: 0x200000398200 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_16 00:07:03.396 element at address: 0x200000394a00 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_34 00:07:03.396 element at address: 0x2000003947c0 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_35 00:07:03.396 element at address: 0x200000394480 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_17 00:07:03.396 element at address: 0x200000390c80 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_36 00:07:03.396 element at address: 0x200000390a40 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_37 00:07:03.396 element at address: 0x200000390700 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_18 00:07:03.396 element at address: 0x20000038cf00 with size: 0.000305 MiB 00:07:03.396 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_38 00:07:03.397 element at address: 0x20000038ccc0 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_39 00:07:03.397 element at address: 0x20000038c980 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_19 00:07:03.397 element at address: 0x200000389180 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_40 00:07:03.397 element at address: 0x200000388f40 with size: 
0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_41 00:07:03.397 element at address: 0x200000388c00 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_20 00:07:03.397 element at address: 0x200000385400 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_42 00:07:03.397 element at address: 0x2000003851c0 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_43 00:07:03.397 element at address: 0x200000384e80 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_21 00:07:03.397 element at address: 0x200000381680 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_44 00:07:03.397 element at address: 0x200000381440 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_45 00:07:03.397 element at address: 0x200000381100 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_22 00:07:03.397 element at address: 0x20000037d900 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_46 00:07:03.397 element at address: 0x20000037d6c0 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_47 00:07:03.397 element at address: 0x20000037d380 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_23 00:07:03.397 element at address: 0x200000379b80 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_48 00:07:03.397 element at address: 0x200000379940 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_49 
00:07:03.397 element at address: 0x200000379600 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_24 00:07:03.397 element at address: 0x200000375e00 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_50 00:07:03.397 element at address: 0x200000375bc0 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_51 00:07:03.397 element at address: 0x200000375880 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_25 00:07:03.397 element at address: 0x200000372080 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_52 00:07:03.397 element at address: 0x200000371e40 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_53 00:07:03.397 element at address: 0x200000371b00 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_26 00:07:03.397 element at address: 0x20000036e300 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_54 00:07:03.397 element at address: 0x20000036e0c0 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_55 00:07:03.397 element at address: 0x20000036dd80 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_27 00:07:03.397 element at address: 0x20000036a580 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_56 00:07:03.397 element at address: 0x20000036a340 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_57 00:07:03.397 element at address: 0x20000036a000 with size: 0.000305 MiB 00:07:03.397 associated memzone 
info: size: 0.000122 MiB name: rte_compressdev_data_28 00:07:03.397 element at address: 0x200000366800 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_58 00:07:03.397 element at address: 0x2000003665c0 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_59 00:07:03.397 element at address: 0x200000366280 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_29 00:07:03.397 element at address: 0x200000362a80 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_60 00:07:03.397 element at address: 0x200000362840 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_61 00:07:03.397 element at address: 0x200000362500 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_30 00:07:03.397 element at address: 0x20000035ed00 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_62 00:07:03.397 element at address: 0x20000035eac0 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_63 00:07:03.397 element at address: 0x20000035e780 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_31 00:07:03.397 element at address: 0x20000035af80 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_64 00:07:03.397 element at address: 0x20000035ad40 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_65 00:07:03.397 element at address: 0x20000035aa00 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_32 00:07:03.397 element at address: 0x200000357200 with 
size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_66 00:07:03.397 element at address: 0x200000356fc0 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_67 00:07:03.397 element at address: 0x200000356c80 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_33 00:07:03.397 element at address: 0x200000353480 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_68 00:07:03.397 element at address: 0x200000353240 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_69 00:07:03.397 element at address: 0x200000352f00 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_34 00:07:03.397 element at address: 0x20000034f700 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_70 00:07:03.397 element at address: 0x20000034f4c0 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_71 00:07:03.397 element at address: 0x20000034f180 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_35 00:07:03.397 element at address: 0x20000034b980 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_72 00:07:03.397 element at address: 0x20000034b740 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_73 00:07:03.397 element at address: 0x20000034b400 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_36 00:07:03.397 element at address: 0x200000347c00 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_74 
00:07:03.397 element at address: 0x2000003479c0 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_75 00:07:03.397 element at address: 0x200000347680 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_37 00:07:03.397 element at address: 0x200000343e80 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_76 00:07:03.397 element at address: 0x200000343c40 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_77 00:07:03.397 element at address: 0x200000343900 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_38 00:07:03.397 element at address: 0x200000340100 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_78 00:07:03.397 element at address: 0x20000033fec0 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_79 00:07:03.397 element at address: 0x20000033fb80 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_39 00:07:03.397 element at address: 0x20000033c380 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_80 00:07:03.397 element at address: 0x20000033c140 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_81 00:07:03.397 element at address: 0x20000033be00 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_40 00:07:03.397 element at address: 0x200000338600 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_82 00:07:03.397 element at address: 0x2000003383c0 with size: 0.000305 MiB 00:07:03.397 associated memzone 
info: size: 0.000122 MiB name: rte_cryptodev_data_83 00:07:03.397 element at address: 0x200000338080 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_41 00:07:03.397 element at address: 0x200000334880 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_84 00:07:03.397 element at address: 0x200000334640 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_85 00:07:03.397 element at address: 0x200000334300 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_42 00:07:03.397 element at address: 0x200000330b00 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_86 00:07:03.397 element at address: 0x2000003308c0 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_87 00:07:03.397 element at address: 0x200000330580 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_43 00:07:03.397 element at address: 0x20000032cd80 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_88 00:07:03.397 element at address: 0x20000032cb40 with size: 0.000305 MiB 00:07:03.397 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_89 00:07:03.397 element at address: 0x20000032c800 with size: 0.000305 MiB 00:07:03.398 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_44 00:07:03.398 element at address: 0x200000329000 with size: 0.000305 MiB 00:07:03.398 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_90 00:07:03.398 element at address: 0x200000328dc0 with size: 0.000305 MiB 00:07:03.398 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_91 00:07:03.398 element at address: 0x200000328a80 with 
size: 0.000305 MiB 00:07:03.398 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_45 00:07:03.398 element at address: 0x200000325280 with size: 0.000305 MiB 00:07:03.398 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_92 00:07:03.398 element at address: 0x200000325040 with size: 0.000305 MiB 00:07:03.398 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_93 00:07:03.398 element at address: 0x200000324d00 with size: 0.000305 MiB 00:07:03.398 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_46 00:07:03.398 element at address: 0x200000321500 with size: 0.000305 MiB 00:07:03.398 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_94 00:07:03.398 element at address: 0x2000003212c0 with size: 0.000305 MiB 00:07:03.398 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_95 00:07:03.398 element at address: 0x200000320f80 with size: 0.000305 MiB 00:07:03.398 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_47 00:07:03.398 element at address: 0x2000003d5900 with size: 0.000244 MiB 00:07:03.398 associated memzone info: size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1 00:07:03.398 21:50:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:03.398 21:50:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1290327 00:07:03.398 21:50:22 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1290327 ']' 00:07:03.398 21:50:22 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1290327 00:07:03.398 21:50:22 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:07:03.398 21:50:22 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:03.398 21:50:22 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1290327 00:07:03.657 21:50:22 dpdk_mem_utility -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:03.657 21:50:22 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:03.657 21:50:22 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1290327' 00:07:03.657 killing process with pid 1290327 00:07:03.657 21:50:22 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1290327 00:07:03.657 21:50:22 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1290327 00:07:06.193 00:07:06.193 real 0m4.007s 00:07:06.193 user 0m3.880s 00:07:06.193 sys 0m0.654s 00:07:06.193 21:50:25 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.193 21:50:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:06.193 ************************************ 00:07:06.193 END TEST dpdk_mem_utility 00:07:06.193 ************************************ 00:07:06.193 21:50:25 -- common/autotest_common.sh@1142 -- # return 0 00:07:06.193 21:50:25 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh 00:07:06.193 21:50:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.193 21:50:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.193 21:50:25 -- common/autotest_common.sh@10 -- # set +x 00:07:06.193 ************************************ 00:07:06.193 START TEST event 00:07:06.193 ************************************ 00:07:06.193 21:50:25 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh 00:07:06.193 * Looking for test storage... 
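The traced teardown above (`kill -0`, `uname`, `ps --no-headers -o comm=`, the `sudo` guard, then `kill` and `wait`) can be sketched as a standalone helper. This is a hypothetical reconstruction from the xtrace output only, not the actual `autotest_common.sh` source; the function name and return codes are illustrative.

```shell
# Hypothetical reconstruction of the killprocess() helper traced in the log.
# Behavior is inferred from the xtrace lines; details may differ from SPDK's
# real common/autotest_common.sh implementation.
killprocess() {
    local pid=$1
    # Trace showed: '[' -z <pid> ']' -- bail out if no PID was supplied
    [ -z "$pid" ] && return 1
    # kill -0 only probes that the process exists; it sends no signal
    kill -0 "$pid" 2>/dev/null || return 0
    if [ "$(uname)" = Linux ]; then
        # Resolve the real command name (the log resolved "reactor_0")
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # Trace showed a guard against killing a bare sudo wrapper directly
        [ "$process_name" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    # Reap the process so the exit status is collected before returning
    wait "$pid" 2>/dev/null
    return 0
}
```

Under these assumptions, the `wait` at the end mirrors the log's final `wait 1290327` step, which is why the "real/user/sys" timing summary only appears once the reactor process has fully exited.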
00:07:06.193 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event 00:07:06.193 21:50:25 event -- event/event.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:06.193 21:50:25 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:06.193 21:50:25 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:06.193 21:50:25 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:06.193 21:50:25 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.193 21:50:25 event -- common/autotest_common.sh@10 -- # set +x 00:07:06.193 ************************************ 00:07:06.193 START TEST event_perf 00:07:06.193 ************************************ 00:07:06.193 21:50:25 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:06.193 Running I/O for 1 seconds...[2024-07-13 21:50:25.369933] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:06.193 [2024-07-13 21:50:25.370010] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1291169 ] 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:06.193 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:06.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.193 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:06.193 [2024-07-13 21:50:25.528844] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:06.452 [2024-07-13 21:50:25.730283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.452 [2024-07-13 21:50:25.730356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.452 [2024-07-13 21:50:25.730414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.452 [2024-07-13 21:50:25.730438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.831 Running I/O for 1 seconds... 00:07:07.831 lcore 0: 208549 00:07:07.831 lcore 1: 208548 00:07:07.831 lcore 2: 208550 00:07:07.831 lcore 3: 208550 00:07:07.831 done. 
00:07:07.831 00:07:07.831 real 0m1.808s 00:07:07.831 user 0m4.612s 00:07:07.831 sys 0m0.190s 00:07:07.831 21:50:27 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.831 21:50:27 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:07.831 ************************************ 00:07:07.831 END TEST event_perf 00:07:07.831 ************************************ 00:07:07.831 21:50:27 event -- common/autotest_common.sh@1142 -- # return 0 00:07:07.831 21:50:27 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:07.831 21:50:27 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:07.831 21:50:27 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.831 21:50:27 event -- common/autotest_common.sh@10 -- # set +x 00:07:07.831 ************************************ 00:07:07.831 START TEST event_reactor 00:07:07.831 ************************************ 00:07:07.831 21:50:27 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:08.090 [2024-07-13 21:50:27.261496] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:08.090 [2024-07-13 21:50:27.261576] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1291459 ]
00:07:08.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:08.090 EAL: Requested device 0000:3d:01.0 cannot be used
00:07:08.091 [the same qat_pci_device_allocate()/EAL message pair repeats for every remaining QAT VF up to 0000:3f:02.7]
00:07:08.091 [2024-07-13 21:50:27.423808] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:08.350 [2024-07-13 21:50:27.636207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:09.727 test_start
00:07:09.727 oneshot
00:07:09.727 tick 100
00:07:09.727 tick 100
00:07:09.727 tick 250
00:07:09.727 tick 100
00:07:09.727 tick 100
00:07:09.727 tick 100
00:07:09.727 tick 250
00:07:09.727 tick 500
00:07:09.727 tick 100
00:07:09.727 tick 100
00:07:09.727 tick 250
00:07:09.727 tick 100
00:07:09.727 tick 100
00:07:09.727 test_end
00:07:09.727
00:07:09.727 real 0m1.823s
00:07:09.727 user 0m1.618s
00:07:09.727 sys 0m0.196s
00:07:09.728 21:50:29 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:09.728 21:50:29 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:07:09.728 ************************************
00:07:09.728 END TEST event_reactor
00:07:09.728 ************************************
00:07:09.728 21:50:29 event -- common/autotest_common.sh@1142 -- # return 0
00:07:09.728 21:50:29 event -- event/event.sh@47 -- # run_test event_reactor_perf
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:07:09.728 21:50:29 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:07:09.728 21:50:29 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:09.728 21:50:29 event -- common/autotest_common.sh@10 -- # set +x
00:07:09.987 ************************************
00:07:09.987 START TEST event_reactor_perf
00:07:09.987 ************************************
00:07:09.987 21:50:29 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:07:09.987 [2024-07-13 21:50:29.166347] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:07:09.987 [2024-07-13 21:50:29.166423] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1291797 ]
00:07:09.987 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:09.987 EAL: Requested device 0000:3d:01.0 cannot be used
00:07:09.988 [the same qat_pci_device_allocate()/EAL message pair repeats for every remaining QAT VF up to 0000:3f:02.7]
00:07:09.988 [2024-07-13 21:50:29.325803] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:10.247 [2024-07-13 21:50:29.531812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:11.623 test_start
00:07:11.623 test_end
00:07:11.623 Performance: 401247 events per second
00:07:11.623
00:07:11.623 real 0m1.819s
00:07:11.623 user 0m1.635s
00:07:11.623 sys 0m0.176s
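The `Performance: 401247 events per second` line above is a rate: a raw event count divided by the elapsed runtime. A small shell sketch of that arithmetic follows; the values are made up for illustration and are not taken from this run (the real tool measures elapsed time with a high-resolution tick counter rather than whole seconds).

```shell
#!/usr/bin/env bash
# Illustrative rate calculation: events per second = total events / elapsed
# seconds. Both values below are invented for the example.
events=800000
elapsed_sec=2
rate=$(( events / elapsed_sec ))
echo "Performance: ${rate} events per second"
```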
00:07:11.623 21:50:30 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.623 21:50:30 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:11.623 ************************************ 00:07:11.623 END TEST event_reactor_perf 00:07:11.623 ************************************ 00:07:11.623 21:50:30 event -- common/autotest_common.sh@1142 -- # return 0 00:07:11.623 21:50:30 event -- event/event.sh@49 -- # uname -s 00:07:11.623 21:50:30 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:11.623 21:50:30 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:11.623 21:50:30 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:11.623 21:50:30 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.623 21:50:30 event -- common/autotest_common.sh@10 -- # set +x 00:07:11.883 ************************************ 00:07:11.883 START TEST event_scheduler 00:07:11.883 ************************************ 00:07:11.883 21:50:31 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:11.883 * Looking for test storage... 
00:07:11.883 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler 00:07:11.883 21:50:31 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:11.883 21:50:31 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:11.883 21:50:31 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1292311 00:07:11.883 21:50:31 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:11.883 21:50:31 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1292311 00:07:11.883 21:50:31 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1292311 ']' 00:07:11.883 21:50:31 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.883 21:50:31 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.883 21:50:31 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.883 21:50:31 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.883 21:50:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:11.883 [2024-07-13 21:50:31.209925] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:11.883 [2024-07-13 21:50:31.210021] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1292311 ]
00:07:12.143 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:12.143 EAL: Requested device 0000:3d:01.0 cannot be used
00:07:12.144 [the same qat_pci_device_allocate()/EAL message pair repeats for every remaining QAT VF up to 0000:3f:02.7]
00:07:12.144 [2024-07-13 21:50:31.368270] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:12.403 [2024-07-13 21:50:31.572374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:12.403 [2024-07-13 21:50:31.572440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:12.403 [2024-07-13 21:50:31.572496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:12.403 [2024-07-13 21:50:31.572516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:07:12.662 21:50:31 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:12.662 21:50:31 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0
00:07:12.662 21:50:31 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:07:12.662 21:50:31 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:12.662 21:50:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:12.662 [2024-07-13 21:50:31.998559] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:07:12.662 [2024-07-13 21:50:31.998591] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:07:12.662 [2024-07-13 21:50:31.998607] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:12.662 [2024-07-13 21:50:31.998619] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:12.662 [2024-07-13 21:50:31.998629] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:12.662 21:50:32 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.662 21:50:32 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:12.662 21:50:32 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.662 21:50:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:13.230 [2024-07-13 21:50:32.367511] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:13.230 21:50:32 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.230 21:50:32 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:13.230 21:50:32 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:13.230 21:50:32 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.230 21:50:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:13.230 ************************************ 00:07:13.230 START TEST scheduler_create_thread 00:07:13.230 ************************************ 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:13.230 21:50:32 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.230 2 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.230 3 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.230 4 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.230 5 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.230 6 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.230 7 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.230 8 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 
-- # xtrace_disable 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.230 9 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.230 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.231 10 00:07:13.231 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.231 21:50:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:13.231 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.231 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.231 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.231 21:50:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:13.231 21:50:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:13.231 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.231 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.231 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:07:13.231 21:50:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:13.231 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.231 21:50:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.641 21:50:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.641 21:50:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:14.641 21:50:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:14.641 21:50:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.641 21:50:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.017 21:50:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.017 00:07:16.017 real 0m2.626s 00:07:16.017 user 0m0.024s 00:07:16.017 sys 0m0.007s 00:07:16.017 21:50:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.017 21:50:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.017 ************************************ 00:07:16.017 END TEST scheduler_create_thread 00:07:16.017 ************************************ 00:07:16.017 21:50:35 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:07:16.017 21:50:35 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:16.017 21:50:35 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1292311 00:07:16.017 21:50:35 event.event_scheduler -- 
common/autotest_common.sh@948 -- # '[' -z 1292311 ']' 00:07:16.017 21:50:35 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1292311 00:07:16.018 21:50:35 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:07:16.018 21:50:35 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:16.018 21:50:35 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1292311 00:07:16.018 21:50:35 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:16.018 21:50:35 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:16.018 21:50:35 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1292311' 00:07:16.018 killing process with pid 1292311 00:07:16.018 21:50:35 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1292311 00:07:16.018 21:50:35 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1292311 00:07:16.277 [2024-07-13 21:50:35.416298] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:07:17.658 00:07:17.658 real 0m5.665s 00:07:17.658 user 0m11.397s 00:07:17.658 sys 0m0.562s 00:07:17.658 21:50:36 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.658 21:50:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:17.658 ************************************ 00:07:17.658 END TEST event_scheduler 00:07:17.658 ************************************ 00:07:17.658 21:50:36 event -- common/autotest_common.sh@1142 -- # return 0 00:07:17.658 21:50:36 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:17.658 21:50:36 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:17.658 21:50:36 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.658 21:50:36 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.658 21:50:36 event -- common/autotest_common.sh@10 -- # set +x 00:07:17.658 ************************************ 00:07:17.658 START TEST app_repeat 00:07:17.658 ************************************ 00:07:17.658 21:50:36 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:07:17.658 21:50:36 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.658 21:50:36 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.658 21:50:36 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:17.658 21:50:36 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.658 21:50:36 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:17.658 21:50:36 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:17.658 21:50:36 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:17.658 21:50:36 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1293186 00:07:17.658 21:50:36 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:17.658 21:50:36 event.app_repeat -- 
event/event.sh@18 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:17.658 21:50:36 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1293186' 00:07:17.658 Process app_repeat pid: 1293186 00:07:17.658 21:50:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:17.658 21:50:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:17.658 spdk_app_start Round 0 00:07:17.658 21:50:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1293186 /var/tmp/spdk-nbd.sock 00:07:17.658 21:50:36 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1293186 ']' 00:07:17.659 21:50:36 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:17.659 21:50:36 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.659 21:50:36 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:17.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:17.659 21:50:36 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.659 21:50:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:17.659 [2024-07-13 21:50:36.852161] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:17.659 [2024-07-13 21:50:36.852255] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1293186 ] 00:07:17.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.659 EAL: Requested device 0000:3d:01.0 cannot be used
00:07:17.659
[2024-07-13 21:50:37.017906] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:17.918 [2024-07-13 21:50:37.241911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.919 [2024-07-13 21:50:37.241920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.487 21:50:37 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:18.487 21:50:37 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:18.487 21:50:37 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:18.487 Malloc0 00:07:18.746 21:50:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:18.746 Malloc1 00:07:18.746 21:50:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:18.746 21:50:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.746
21:50:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:18.746 21:50:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:19.006 21:50:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:19.006 21:50:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:19.006 21:50:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:19.006 21:50:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.006 21:50:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:19.006 21:50:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:19.006 21:50:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:19.006 21:50:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:19.006 21:50:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:19.006 21:50:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:19.006 21:50:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:19.006 21:50:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:19.006 /dev/nbd0 00:07:19.006 21:50:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:19.006 21:50:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:19.006 21:50:38 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:19.006 21:50:38 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:19.006 21:50:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:19.006 21:50:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:19.006 
21:50:38 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:19.006 21:50:38 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:19.006 21:50:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:19.006 21:50:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:19.006 21:50:38 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:19.006 1+0 records in 00:07:19.006 1+0 records out 00:07:19.006 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226816 s, 18.1 MB/s 00:07:19.006 21:50:38 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:19.006 21:50:38 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:19.006 21:50:38 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:19.006 21:50:38 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:19.007 21:50:38 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:19.007 21:50:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.007 21:50:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:19.007 21:50:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:19.266 /dev/nbd1 00:07:19.266 21:50:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:19.266 21:50:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:19.266 21:50:38 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:19.266 21:50:38 event.app_repeat -- common/autotest_common.sh@867 -- # local i 
00:07:19.266 21:50:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:19.266 21:50:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:19.266 21:50:38 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:19.266 21:50:38 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:19.266 21:50:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:19.266 21:50:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:19.266 21:50:38 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:19.266 1+0 records in 00:07:19.266 1+0 records out 00:07:19.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270469 s, 15.1 MB/s 00:07:19.266 21:50:38 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:19.266 21:50:38 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:19.266 21:50:38 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:19.266 21:50:38 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:19.266 21:50:38 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:19.266 21:50:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.266 21:50:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:19.266 21:50:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:19.266 21:50:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.266 21:50:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:19.545 { 00:07:19.545 "nbd_device": "/dev/nbd0", 00:07:19.545 "bdev_name": "Malloc0" 00:07:19.545 }, 00:07:19.545 { 00:07:19.545 "nbd_device": "/dev/nbd1", 00:07:19.545 "bdev_name": "Malloc1" 00:07:19.545 } 00:07:19.545 ]' 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:19.545 { 00:07:19.545 "nbd_device": "/dev/nbd0", 00:07:19.545 "bdev_name": "Malloc0" 00:07:19.545 }, 00:07:19.545 { 00:07:19.545 "nbd_device": "/dev/nbd1", 00:07:19.545 "bdev_name": "Malloc1" 00:07:19.545 } 00:07:19.545 ]' 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:19.545 /dev/nbd1' 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:19.545 /dev/nbd1' 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 
00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:19.545 256+0 records in 00:07:19.545 256+0 records out 00:07:19.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00593605 s, 177 MB/s 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:19.545 256+0 records in 00:07:19.545 256+0 records out 00:07:19.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150098 s, 69.9 MB/s 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:19.545 256+0 records in 00:07:19.545 256+0 records out 00:07:19.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242311 s, 43.3 MB/s 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.545 21:50:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:19.805 21:50:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:19.805 21:50:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:19.805 21:50:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:19.805 21:50:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.805 21:50:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.805 21:50:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:19.805 21:50:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:19.805 21:50:39 event.app_repeat -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:19.805 21:50:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.805 21:50:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:20.065 21:50:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:20.065 21:50:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:20.065 21:50:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:20.065 21:50:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.065 21:50:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.065 21:50:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:20.065 21:50:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:20.065 21:50:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.065 21:50:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:20.066 21:50:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.066 21:50:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:20.066 21:50:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:20.066 21:50:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:20.066 21:50:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:20.325 21:50:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:20.325 21:50:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:20.325 21:50:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:20.325 21:50:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:20.325 21:50:39 
event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:20.325 21:50:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:20.325 21:50:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:20.325 21:50:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:20.325 21:50:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:20.325 21:50:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:20.584 21:50:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:21.960 [2024-07-13 21:50:41.229089] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:22.220 [2024-07-13 21:50:41.427782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.220 [2024-07-13 21:50:41.427783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.479 [2024-07-13 21:50:41.647631] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:22.479 [2024-07-13 21:50:41.647681] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:07:23.858 21:50:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:23.858 21:50:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:23.858 spdk_app_start Round 1 00:07:23.858 21:50:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1293186 /var/tmp/spdk-nbd.sock 00:07:23.858 21:50:42 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1293186 ']' 00:07:23.858 21:50:42 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:23.858 21:50:42 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.858 21:50:42 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:23.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:23.858 21:50:42 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.858 21:50:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:23.858 21:50:43 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.858 21:50:43 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:23.858 21:50:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:23.858 Malloc0 00:07:24.116 21:50:43 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:24.116 Malloc1 00:07:24.373 21:50:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:24.373 21:50:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.373 21:50:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # 
bdev_list=('Malloc0' 'Malloc1') 00:07:24.373 21:50:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:24.373 21:50:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.373 21:50:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:24.373 21:50:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:24.373 21:50:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.373 21:50:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:24.373 21:50:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:24.373 21:50:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.373 21:50:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:24.373 21:50:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:24.373 21:50:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:24.373 21:50:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:24.373 21:50:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:24.373 /dev/nbd0 00:07:24.373 21:50:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:24.373 21:50:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:24.373 21:50:43 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:24.373 21:50:43 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:24.373 21:50:43 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:24.373 21:50:43 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:24.373 21:50:43 event.app_repeat -- common/autotest_common.sh@870 -- # 
grep -q -w nbd0 /proc/partitions 00:07:24.373 21:50:43 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:24.373 21:50:43 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:24.374 21:50:43 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:24.374 21:50:43 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:24.374 1+0 records in 00:07:24.374 1+0 records out 00:07:24.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022075 s, 18.6 MB/s 00:07:24.374 21:50:43 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:24.374 21:50:43 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:24.374 21:50:43 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:24.374 21:50:43 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:24.374 21:50:43 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:24.374 21:50:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.374 21:50:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:24.374 21:50:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:24.632 /dev/nbd1 00:07:24.632 21:50:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:24.632 21:50:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:24.632 21:50:43 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:24.632 21:50:43 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:24.632 21:50:43 event.app_repeat -- common/autotest_common.sh@869 -- # 
(( i = 1 )) 00:07:24.632 21:50:43 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:24.632 21:50:43 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:24.632 21:50:43 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:24.632 21:50:43 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:24.632 21:50:43 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:24.632 21:50:43 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:24.632 1+0 records in 00:07:24.632 1+0 records out 00:07:24.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223171 s, 18.4 MB/s 00:07:24.632 21:50:43 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:24.632 21:50:43 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:24.632 21:50:43 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:24.632 21:50:43 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:24.632 21:50:43 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:24.632 21:50:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.633 21:50:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:24.633 21:50:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:24.633 21:50:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.633 21:50:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:24.892 21:50:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:24.892 { 00:07:24.892 "nbd_device": "/dev/nbd0", 00:07:24.892 "bdev_name": "Malloc0" 00:07:24.892 }, 00:07:24.892 { 00:07:24.892 "nbd_device": "/dev/nbd1", 00:07:24.892 "bdev_name": "Malloc1" 00:07:24.892 } 00:07:24.892 ]' 00:07:24.892 21:50:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:24.892 { 00:07:24.892 "nbd_device": "/dev/nbd0", 00:07:24.892 "bdev_name": "Malloc0" 00:07:24.892 }, 00:07:24.892 { 00:07:24.892 "nbd_device": "/dev/nbd1", 00:07:24.892 "bdev_name": "Malloc1" 00:07:24.892 } 00:07:24.892 ]' 00:07:24.892 21:50:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:24.892 21:50:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:24.892 /dev/nbd1' 00:07:24.892 21:50:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:24.892 /dev/nbd1' 00:07:24.892 21:50:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:24.892 21:50:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:24.892 21:50:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:24.892 21:50:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:24.892 21:50:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:24.892 21:50:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:24.892 21:50:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.892 21:50:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:24.892 21:50:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:24.892 21:50:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:07:24.892 21:50:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:24.892 21:50:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd 
if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:24.892 256+0 records in 00:07:24.892 256+0 records out 00:07:24.892 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106922 s, 98.1 MB/s 00:07:24.892 21:50:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:24.892 21:50:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:24.892 256+0 records in 00:07:24.892 256+0 records out 00:07:24.893 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206635 s, 50.7 MB/s 00:07:24.893 21:50:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:24.893 21:50:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:24.893 256+0 records in 00:07:24.893 256+0 records out 00:07:24.893 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231132 s, 45.4 MB/s 00:07:24.893 21:50:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:24.893 21:50:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.893 21:50:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:24.893 21:50:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:24.893 21:50:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:07:24.893 21:50:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:24.893 21:50:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:24.893 21:50:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:24.893 21:50:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b 
-n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:24.893 21:50:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:24.893 21:50:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:24.893 21:50:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:07:24.893 21:50:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:24.893 21:50:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.893 21:50:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.893 21:50:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:24.893 21:50:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:24.893 21:50:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.893 21:50:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:25.153 21:50:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:25.153 21:50:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:25.153 21:50:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:25.153 21:50:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.153 21:50:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.153 21:50:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:25.153 21:50:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:25.153 21:50:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.153 21:50:44 event.app_repeat -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:25.153 21:50:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:25.412 21:50:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:25.412 21:50:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:25.412 21:50:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:25.412 21:50:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.412 21:50:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.412 21:50:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:25.412 21:50:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:25.412 21:50:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.412 21:50:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:25.412 21:50:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.412 21:50:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:25.412 21:50:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:25.671 21:50:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:25.671 21:50:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:25.671 21:50:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:25.671 21:50:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:25.671 21:50:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:25.671 21:50:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:25.671 21:50:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:25.671 21:50:44 
event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:25.671 21:50:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:25.671 21:50:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:25.671 21:50:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:25.671 21:50:44 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:25.930 21:50:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:27.310 [2024-07-13 21:50:46.594755] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:27.605 [2024-07-13 21:50:46.788949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.605 [2024-07-13 21:50:46.788955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.864 [2024-07-13 21:50:47.015103] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:27.864 [2024-07-13 21:50:47.015149] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:29.243 21:50:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:29.243 21:50:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:29.243 spdk_app_start Round 2 00:07:29.243 21:50:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1293186 /var/tmp/spdk-nbd.sock 00:07:29.243 21:50:48 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1293186 ']' 00:07:29.243 21:50:48 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:29.243 21:50:48 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:29.243 21:50:48 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:29.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:29.243 21:50:48 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:29.243 21:50:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:29.243 21:50:48 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.243 21:50:48 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:29.243 21:50:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:29.243 Malloc0 00:07:29.243 21:50:48 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:29.503 Malloc1 00:07:29.503 21:50:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:29.503 21:50:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.503 21:50:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:29.503 21:50:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:29.503 21:50:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:29.503 21:50:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:29.503 21:50:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:29.503 21:50:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.503 21:50:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:29.503 21:50:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:29.503 21:50:48 event.app_repeat -- bdev/nbd_common.sh@11 -- 
# nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:29.503 21:50:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:29.503 21:50:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:29.503 21:50:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:29.503 21:50:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:29.503 21:50:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:29.761 /dev/nbd0 00:07:29.762 21:50:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:29.762 21:50:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:29.762 21:50:49 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:29.762 21:50:49 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:29.762 21:50:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:29.762 21:50:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:29.762 21:50:49 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:29.762 21:50:49 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:29.762 21:50:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:29.762 21:50:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:29.762 21:50:49 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:29.762 1+0 records in 00:07:29.762 1+0 records out 00:07:29.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000133786 s, 30.6 MB/s 00:07:29.762 21:50:49 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:29.762 21:50:49 event.app_repeat 
-- common/autotest_common.sh@884 -- # size=4096 00:07:29.762 21:50:49 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:29.762 21:50:49 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:29.762 21:50:49 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:29.762 21:50:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:29.762 21:50:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:29.762 21:50:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:30.021 /dev/nbd1 00:07:30.021 21:50:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:30.021 21:50:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:30.021 21:50:49 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:30.021 21:50:49 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:30.021 21:50:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:30.021 21:50:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:30.021 21:50:49 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:30.021 21:50:49 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:30.021 21:50:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:30.021 21:50:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:30.021 21:50:49 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:30.021 1+0 records in 00:07:30.021 1+0 records out 00:07:30.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340528 s, 12.0 MB/s 00:07:30.021 
21:50:49 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:30.021 21:50:49 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:30.021 21:50:49 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:30.021 21:50:49 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:30.021 21:50:49 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:30.021 21:50:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:30.021 21:50:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:30.021 21:50:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:30.021 21:50:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.021 21:50:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:30.281 { 00:07:30.281 "nbd_device": "/dev/nbd0", 00:07:30.281 "bdev_name": "Malloc0" 00:07:30.281 }, 00:07:30.281 { 00:07:30.281 "nbd_device": "/dev/nbd1", 00:07:30.281 "bdev_name": "Malloc1" 00:07:30.281 } 00:07:30.281 ]' 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:30.281 { 00:07:30.281 "nbd_device": "/dev/nbd0", 00:07:30.281 "bdev_name": "Malloc0" 00:07:30.281 }, 00:07:30.281 { 00:07:30.281 "nbd_device": "/dev/nbd1", 00:07:30.281 "bdev_name": "Malloc1" 00:07:30.281 } 00:07:30.281 ]' 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:30.281 /dev/nbd1' 00:07:30.281 21:50:49 event.app_repeat -- 
bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:30.281 /dev/nbd1' 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:30.281 256+0 records in 00:07:30.281 256+0 records out 00:07:30.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0033576 s, 312 MB/s 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:30.281 256+0 records in 00:07:30.281 256+0 records out 00:07:30.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215228 s, 48.7 MB/s 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:30.281 21:50:49 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:30.281 256+0 records in 00:07:30.281 256+0 records out 00:07:30.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244205 s, 42.9 MB/s 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:30.281 21:50:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:07:30.282 21:50:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:30.282 21:50:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.282 21:50:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.282 21:50:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:30.282 21:50:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:30.282 21:50:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:30.282 21:50:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:30.570 21:50:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:30.570 21:50:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:30.570 21:50:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:30.570 21:50:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:30.570 21:50:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:30.570 21:50:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:30.570 21:50:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:30.570 21:50:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:30.570 21:50:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:30.570 21:50:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:30.570 21:50:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:30.570 21:50:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:30.570 21:50:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:30.570 21:50:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:30.570 21:50:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:30.570 21:50:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:30.570 21:50:49 
event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:30.570 21:50:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:30.570 21:50:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:30.570 21:50:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.570 21:50:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:30.829 21:50:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:30.829 21:50:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:30.829 21:50:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:30.829 21:50:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:30.829 21:50:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:30.829 21:50:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:30.829 21:50:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:30.829 21:50:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:30.829 21:50:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:30.829 21:50:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:30.829 21:50:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:30.829 21:50:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:30.829 21:50:50 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:31.396 21:50:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:32.771 [2024-07-13 21:50:51.891016] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:32.771 [2024-07-13 21:50:52.085027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.771 [2024-07-13 
21:50:52.085028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.030 [2024-07-13 21:50:52.308315] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:33.030 [2024-07-13 21:50:52.308363] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:34.406 21:50:53 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1293186 /var/tmp/spdk-nbd.sock 00:07:34.406 21:50:53 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1293186 ']' 00:07:34.406 21:50:53 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:34.406 21:50:53 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:34.406 21:50:53 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:34.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:34.407 21:50:53 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:34.407 21:50:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:34.407 21:50:53 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:34.407 21:50:53 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:34.407 21:50:53 event.app_repeat -- event/event.sh@39 -- # killprocess 1293186 00:07:34.407 21:50:53 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1293186 ']' 00:07:34.407 21:50:53 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1293186 00:07:34.407 21:50:53 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:07:34.407 21:50:53 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:34.407 21:50:53 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1293186 00:07:34.407 21:50:53 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:34.407 21:50:53 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:34.407 21:50:53 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1293186' 00:07:34.407 killing process with pid 1293186 00:07:34.407 21:50:53 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1293186 00:07:34.407 21:50:53 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1293186 00:07:35.781 spdk_app_start is called in Round 0. 00:07:35.781 Shutdown signal received, stop current app iteration 00:07:35.781 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:07:35.781 spdk_app_start is called in Round 1. 00:07:35.781 Shutdown signal received, stop current app iteration 00:07:35.781 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:07:35.781 spdk_app_start is called in Round 2. 
00:07:35.781 Shutdown signal received, stop current app iteration 00:07:35.781 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:07:35.781 spdk_app_start is called in Round 3. 00:07:35.781 Shutdown signal received, stop current app iteration 00:07:35.781 21:50:54 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:35.781 21:50:54 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:35.781 00:07:35.781 real 0m18.144s 00:07:35.781 user 0m36.046s 00:07:35.781 sys 0m3.117s 00:07:35.781 21:50:54 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.781 21:50:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:35.781 ************************************ 00:07:35.781 END TEST app_repeat 00:07:35.781 ************************************ 00:07:35.781 21:50:54 event -- common/autotest_common.sh@1142 -- # return 0 00:07:35.781 21:50:54 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:35.781 00:07:35.781 real 0m29.796s 00:07:35.781 user 0m55.513s 00:07:35.781 sys 0m4.616s 00:07:35.781 21:50:54 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.781 21:50:54 event -- common/autotest_common.sh@10 -- # set +x 00:07:35.781 ************************************ 00:07:35.781 END TEST event 00:07:35.781 ************************************ 00:07:35.781 21:50:55 -- common/autotest_common.sh@1142 -- # return 0 00:07:35.781 21:50:55 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:07:35.781 21:50:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:35.781 21:50:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.781 21:50:55 -- common/autotest_common.sh@10 -- # set +x 00:07:35.781 ************************************ 00:07:35.781 START TEST thread 00:07:35.781 ************************************ 00:07:35.781 21:50:55 thread -- common/autotest_common.sh@1123 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:07:35.781 * Looking for test storage... 00:07:35.781 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread 00:07:35.781 21:50:55 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:35.781 21:50:55 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:35.781 21:50:55 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.781 21:50:55 thread -- common/autotest_common.sh@10 -- # set +x 00:07:36.040 ************************************ 00:07:36.040 START TEST thread_poller_perf 00:07:36.040 ************************************ 00:07:36.040 21:50:55 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:36.040 [2024-07-13 21:50:55.230718] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:36.040 [2024-07-13 21:50:55.230803] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1296576 ] 00:07:36.040 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.040 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:36.040 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.040 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:36.040 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.040 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:36.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:36.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:36.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:36.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:36.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:36.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:36.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:36.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:36.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:36.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:36.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:36.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:36.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:36.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:36.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:36.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:36.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:36.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:36.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:36.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:36.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:36.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:36.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:36.041 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:36.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:36.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:36.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:36.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:36.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.041 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:36.041 [2024-07-13 21:50:55.392224] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.299 [2024-07-13 21:50:55.595028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.299 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:37.675 ====================================== 00:07:37.675 busy:2507638674 (cyc) 00:07:37.675 total_run_count: 420000 00:07:37.675 tsc_hz: 2500000000 (cyc) 00:07:37.675 ====================================== 00:07:37.675 poller_cost: 5970 (cyc), 2388 (nsec) 00:07:37.675 00:07:37.675 real 0m1.814s 00:07:37.675 user 0m1.622s 00:07:37.675 sys 0m0.185s 00:07:37.675 21:50:56 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.675 21:50:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:37.675 ************************************ 00:07:37.675 END TEST thread_poller_perf 00:07:37.675 ************************************ 00:07:37.675 21:50:57 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:37.675 21:50:57 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:37.675 21:50:57 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:37.675 21:50:57 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.675 21:50:57 thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.935 ************************************ 00:07:37.935 START TEST thread_poller_perf 00:07:37.935 ************************************ 00:07:37.935 21:50:57 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:37.935 [2024-07-13 21:50:57.119158] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:37.935 [2024-07-13 21:50:57.119242] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1297017 ]
00:07:37.935 (qat_pci_device_allocate(): Reached maximum number of QAT devices / EAL: Requested device ... cannot be used — repeated for devices 0000:3d:01.0 through 0000:3f:02.7, identical to the block above)
00:07:37.936 [2024-07-13 21:50:57.281740] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.195 [2024-07-13 21:50:57.484781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.195 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:39.574 ====================================== 00:07:39.574 busy:2502881122 (cyc) 00:07:39.574 total_run_count: 5441000 00:07:39.574 tsc_hz: 2500000000 (cyc) 00:07:39.574 ====================================== 00:07:39.574 poller_cost: 460 (cyc), 184 (nsec) 00:07:39.574 00:07:39.574 real 0m1.817s 00:07:39.574 user 0m1.614s 00:07:39.574 sys 0m0.196s 00:07:39.574 21:50:58 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.574 21:50:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:39.574 ************************************ 00:07:39.574 END TEST thread_poller_perf 00:07:39.574 ************************************ 00:07:39.574 21:50:58 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:39.574 21:50:58 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:39.574 00:07:39.574 real 0m3.883s 00:07:39.574 user 0m3.318s 00:07:39.574 sys 0m0.572s 00:07:39.574 21:50:58 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.575 21:50:58 thread -- common/autotest_common.sh@10 -- # set +x 00:07:39.575 ************************************ 00:07:39.575 END TEST thread 00:07:39.575 ************************************ 00:07:39.834 21:50:58 -- common/autotest_common.sh@1142 -- # return 0 00:07:39.834 21:50:58 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh 00:07:39.834 21:50:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:39.834 21:50:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.834 21:50:58 -- common/autotest_common.sh@10 -- # set +x 00:07:39.834 ************************************ 00:07:39.834 START TEST accel 00:07:39.834 ************************************ 00:07:39.834 21:50:59 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh 00:07:39.834 * Looking for test storage... 
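The two poller_perf result blocks above are internally consistent: poller_cost(cyc) = busy / total_run_count, and poller_cost(nsec) = cyc / (tsc_hz / 1e9). A quick standalone sketch recomputing them from the reported counters (the `poller_cost` helper name is illustrative, not part of the test suite):

```shell
# Recompute the poller_cost figures from the raw counters in the reports:
#   cyc  = busy cycles / total_run_count
#   nsec = cyc / (tsc_hz in GHz)
poller_cost() {
  awk -v busy="$1" -v runs="$2" -v hz="$3" 'BEGIN {
    cyc = int(busy / runs)
    printf "%d (cyc), %d (nsec)\n", cyc, cyc / (hz / 1e9)
  }'
}
poller_cost 2507638674 420000  2500000000   # 1 us period run: 5970 (cyc), 2388 (nsec)
poller_cost 2502881122 5441000 2500000000   # 0 us period run: 460 (cyc), 184 (nsec)
```

The busy-poll run (0 us period) executes ~13x more polls in the same second, so its per-poll cost drops from 5970 to 460 cycles.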
00:07:39.834 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:07:39.834 21:50:59 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:39.834 21:50:59 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:39.834 21:50:59 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:39.834 21:50:59 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1297441 00:07:39.834 21:50:59 accel -- accel/accel.sh@63 -- # waitforlisten 1297441 00:07:39.834 21:50:59 accel -- common/autotest_common.sh@829 -- # '[' -z 1297441 ']' 00:07:39.834 21:50:59 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.834 21:50:59 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:39.834 21:50:59 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:39.834 21:50:59 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:39.834 21:50:59 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.834 21:50:59 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.834 21:50:59 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:39.834 21:50:59 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.834 21:50:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.834 21:50:59 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.834 21:50:59 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.834 21:50:59 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.834 21:50:59 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:39.834 21:50:59 accel -- accel/accel.sh@41 -- # jq -r . 00:07:39.834 [2024-07-13 21:50:59.218162] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:39.834 [2024-07-13 21:50:59.218255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1297441 ]
00:07:40.094 (qat_pci_device_allocate(): Reached maximum number of QAT devices / EAL: Requested device ... cannot be used — repeated for devices 0000:3d:01.0 through 0000:3f:02.7, identical to the first block above)
00:07:40.094 [2024-07-13 21:50:59.379611] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.354 [2024-07-13 21:50:59.570688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.293 21:51:00 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:41.293 21:51:00 accel -- common/autotest_common.sh@862 -- # return 0 00:07:41.293 21:51:00 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:41.293 21:51:00 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:41.293 21:51:00 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:41.293 21:51:00 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:41.293 21:51:00 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:41.293 21:51:00 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:41.293 21:51:00 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:41.293 21:51:00 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.293 21:51:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.293 21:51:00 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.293 21:51:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:41.293 21:51:00 accel -- accel/accel.sh@72 -- # IFS== 00:07:41.293 21:51:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:41.293 21:51:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:41.293 (accel.sh@71-73 loop trace repeated identically for each remaining opcode)
00:07:41.293 21:51:00 accel -- accel/accel.sh@75 -- # killprocess 1297441 00:07:41.293 21:51:00 accel -- common/autotest_common.sh@948 -- # '[' -z 1297441 ']' 00:07:41.293 21:51:00 accel -- common/autotest_common.sh@952 -- # kill -0 1297441 00:07:41.293 21:51:00 accel -- common/autotest_common.sh@953 -- # uname 00:07:41.293 21:51:00 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:41.293 21:51:00 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1297441 00:07:41.293 21:51:00 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:41.293 21:51:00 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:41.293 21:51:00 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1297441' killing process with pid 1297441 21:51:00 accel -- common/autotest_common.sh@967 -- # kill 1297441 21:51:00 accel -- common/autotest_common.sh@972 -- # wait 1297441 00:07:43.833 21:51:02 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:43.833 21:51:02 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:43.833 
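The accel.sh@70-@73 trace above is bash splitting the `accel_get_opc_assignments` RPC output, flattened by jq into `opcode=module` lines, into an associative array. A minimal standalone sketch of that pattern (the sample opcode list here is made up, not real RPC output):

```shell
# Parse "opcode=module" pairs into an associative array, as accel.sh's
# exp_opcs/expected_opcs loop does. The input below is illustrative only;
# in the real script it comes from: rpc_cmd accel_get_opc_assignments | jq -r ...
declare -A expected_opcs
exp_opcs=("copy=software" "crc32c=software" "compress=software")
for opc_opt in "${exp_opcs[@]}"; do
  # accel.sh sets IFS== then reads; scoping IFS to the read is equivalent
  IFS="=" read -r opc module <<< "$opc_opt"
  expected_opcs["$opc"]=$module
done
echo "${expected_opcs[crc32c]}"   # software
```

Scoping `IFS` to the `read` avoids leaking the changed separator into the rest of the script, which is why the trace shows `IFS==` immediately before each `read -r opc module`.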
21:51:02 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:43.833 21:51:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.833 21:51:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:43.833 21:51:02 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:43.833 21:51:02 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:43.833 21:51:02 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:43.833 21:51:02 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.833 21:51:02 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.833 21:51:02 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.833 21:51:02 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.833 21:51:02 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.833 21:51:02 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:43.833 21:51:02 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:43.833 21:51:03 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.833 21:51:03 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:43.833 21:51:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:43.833 21:51:03 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:43.833 21:51:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:43.833 21:51:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.833 21:51:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:43.833 ************************************ 00:07:43.833 START TEST accel_missing_filename 00:07:43.833 ************************************ 00:07:43.833 21:51:03 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:43.833 21:51:03 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:43.833 21:51:03 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:43.833 21:51:03 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:43.833 21:51:03 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.833 21:51:03 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:43.833 21:51:03 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.833 21:51:03 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:43.833 21:51:03 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:43.833 21:51:03 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:43.833 21:51:03 
accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.833 21:51:03 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.833 21:51:03 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.833 21:51:03 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.833 21:51:03 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.833 21:51:03 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:43.833 21:51:03 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:43.833 [2024-07-13 21:51:03.156580] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:43.833 [2024-07-13 21:51:03.156673] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1298272 ]
00:07:44.092 (qat_pci_device_allocate(): Reached maximum number of QAT devices / EAL: Requested device ... cannot be used — repeated for devices 0000:3d:01.0 through 0000:3f:02.7, identical to the first block above)
00:07:44.092 [2024-07-13 21:51:03.320504] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.351 [2024-07-13 21:51:03.517537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.609 [2024-07-13 21:51:03.756072] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:45.176 [2024-07-13 21:51:04.280101] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:45.435 A filename 
is required. 00:07:45.435 21:51:04 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:45.435 21:51:04 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:45.435 21:51:04 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:45.435 21:51:04 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:45.435 21:51:04 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:45.435 21:51:04 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:45.435 00:07:45.435 real 0m1.592s 00:07:45.435 user 0m1.375s 00:07:45.435 sys 0m0.243s 00:07:45.435 21:51:04 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.435 21:51:04 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:45.435 ************************************ 00:07:45.435 END TEST accel_missing_filename 00:07:45.435 ************************************ 00:07:45.435 21:51:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:45.435 21:51:04 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:45.435 21:51:04 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:45.435 21:51:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.435 21:51:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.435 ************************************ 00:07:45.435 START TEST accel_compress_verify 00:07:45.435 ************************************ 00:07:45.435 21:51:04 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:45.435 21:51:04 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:45.435 
21:51:04 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:45.435 21:51:04 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:45.435 21:51:04 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.435 21:51:04 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:45.435 21:51:04 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.435 21:51:04 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:45.435 21:51:04 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:45.435 21:51:04 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:45.435 21:51:04 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.435 21:51:04 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.435 21:51:04 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.435 21:51:04 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.435 21:51:04 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.435 21:51:04 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:45.435 21:51:04 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:45.695 [2024-07-13 21:51:04.828829] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:45.695 [2024-07-13 21:51:04.829003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1298911 ] 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:45.695 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:45.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.695 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:45.695 [2024-07-13 21:51:04.992283] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.955 [2024-07-13 21:51:05.199383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.214 [2024-07-13 21:51:05.444541] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:46.783 [2024-07-13 21:51:05.976887] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:47.043 00:07:47.043 Compression does not support the verify option, aborting. 
00:07:47.043 21:51:06 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:47.043 21:51:06 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:47.043 21:51:06 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:47.043 21:51:06 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:47.043 21:51:06 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:47.043 21:51:06 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:47.043 00:07:47.043 real 0m1.613s 00:07:47.043 user 0m1.378s 00:07:47.043 sys 0m0.254s 00:07:47.043 21:51:06 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.043 21:51:06 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:47.043 ************************************ 00:07:47.043 END TEST accel_compress_verify 00:07:47.043 ************************************ 00:07:47.043 21:51:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:47.043 21:51:06 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:47.043 21:51:06 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:47.043 21:51:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.043 21:51:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.303 ************************************ 00:07:47.303 START TEST accel_wrong_workload 00:07:47.303 ************************************ 00:07:47.303 21:51:06 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:47.303 21:51:06 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:47.303 21:51:06 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:47.303 21:51:06 accel.accel_wrong_workload -- 
common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:47.303 21:51:06 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.303 21:51:06 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:47.303 21:51:06 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.303 21:51:06 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:47.303 21:51:06 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:47.303 21:51:06 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:47.303 21:51:06 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.303 21:51:06 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.304 21:51:06 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.304 21:51:06 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.304 21:51:06 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.304 21:51:06 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:47.304 21:51:06 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:47.304 Unsupported workload type: foobar 00:07:47.304 [2024-07-13 21:51:06.519260] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:47.304 accel_perf options: 00:07:47.304 [-h help message] 00:07:47.304 [-q queue depth per core] 00:07:47.304 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:47.304 [-T number of threads per core 00:07:47.304 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:07:47.304 [-t time in seconds] 00:07:47.304 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:47.304 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:47.304 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:47.304 [-l for compress/decompress workloads, name of uncompressed input file 00:07:47.304 [-S for crc32c workload, use this seed value (default 0) 00:07:47.304 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:47.304 [-f for fill workload, use this BYTE value (default 255) 00:07:47.304 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:47.304 [-y verify result if this switch is on] 00:07:47.304 [-a tasks to allocate per core (default: same value as -q)] 00:07:47.304 Can be used to spread operations across a wider range of memory. 
00:07:47.304 21:51:06 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:47.304 21:51:06 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:47.304 21:51:06 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:47.304 21:51:06 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:47.304 00:07:47.304 real 0m0.088s 00:07:47.304 user 0m0.084s 00:07:47.304 sys 0m0.045s 00:07:47.304 21:51:06 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.304 21:51:06 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:47.304 ************************************ 00:07:47.304 END TEST accel_wrong_workload 00:07:47.304 ************************************ 00:07:47.304 21:51:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:47.304 21:51:06 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:47.304 21:51:06 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:47.304 21:51:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.304 21:51:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.304 ************************************ 00:07:47.304 START TEST accel_negative_buffers 00:07:47.304 ************************************ 00:07:47.304 21:51:06 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:47.304 21:51:06 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:47.304 21:51:06 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:47.304 21:51:06 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:47.304 21:51:06 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:07:47.304 21:51:06 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:47.304 21:51:06 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.304 21:51:06 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:47.304 21:51:06 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:47.304 21:51:06 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:47.304 21:51:06 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.304 21:51:06 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.304 21:51:06 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.304 21:51:06 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.304 21:51:06 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.304 21:51:06 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:47.304 21:51:06 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:47.304 -x option must be non-negative. 00:07:47.304 [2024-07-13 21:51:06.672317] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:47.564 accel_perf options: 00:07:47.564 [-h help message] 00:07:47.564 [-q queue depth per core] 00:07:47.564 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:47.564 [-T number of threads per core 00:07:47.564 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:07:47.564 [-t time in seconds] 00:07:47.564 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:47.564 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:47.564 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:47.564 [-l for compress/decompress workloads, name of uncompressed input file 00:07:47.564 [-S for crc32c workload, use this seed value (default 0) 00:07:47.564 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:47.564 [-f for fill workload, use this BYTE value (default 255) 00:07:47.564 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:47.564 [-y verify result if this switch is on] 00:07:47.564 [-a tasks to allocate per core (default: same value as -q)] 00:07:47.564 Can be used to spread operations across a wider range of memory. 
00:07:47.564 21:51:06 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:47.564 21:51:06 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:47.564 21:51:06 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:47.564 21:51:06 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:47.564 00:07:47.564 real 0m0.082s 00:07:47.564 user 0m0.063s 00:07:47.564 sys 0m0.043s 00:07:47.564 21:51:06 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.564 21:51:06 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:47.564 ************************************ 00:07:47.564 END TEST accel_negative_buffers 00:07:47.564 ************************************ 00:07:47.564 21:51:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:47.564 21:51:06 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:47.564 21:51:06 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:47.564 21:51:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.564 21:51:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.564 ************************************ 00:07:47.564 START TEST accel_crc32c 00:07:47.564 ************************************ 00:07:47.564 21:51:06 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:47.564 21:51:06 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:47.564 21:51:06 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:47.564 21:51:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.564 21:51:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.564 21:51:06 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:47.564 21:51:06 accel.accel_crc32c -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:47.564 21:51:06 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:47.564 21:51:06 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.564 21:51:06 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.564 21:51:06 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.564 21:51:06 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.564 21:51:06 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.564 21:51:06 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:47.564 21:51:06 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:47.564 [2024-07-13 21:51:06.834831] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:47.564 [2024-07-13 21:51:06.834912] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1299425 ] 00:07:47.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.564 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:47.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.564 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:47.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.564 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:47.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.564 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:47.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.564 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:47.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.564 
EAL: Requested device 0000:3d:01.5 cannot be used 00:07:47.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.564 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:47.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.564 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:47.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.564 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:47.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.564 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:47.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.564 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:47.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.564 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:47.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.564 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:47.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.564 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:47.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.564 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:47.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.564 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:47.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.565 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:47.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.565 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:47.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.565 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:47.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.565 EAL: Requested device 
0000:3f:01.3 cannot be used 00:07:47.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.565 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:47.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.565 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:47.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.565 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:47.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.565 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:47.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.565 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:47.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.565 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:47.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.565 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:47.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.565 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:47.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.565 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:47.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.565 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:47.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.565 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:47.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.565 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:47.824 [2024-07-13 21:51:06.990944] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.824 [2024-07-13 21:51:07.190178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # 
val= 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:48.084 21:51:07 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:48.084 21:51:07 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.084 21:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.990 
21:51:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:07:49.990 21:51:09 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:49.990
00:07:49.990 real 0m2.570s
00:07:49.990 user 0m2.318s
00:07:49.990 sys 0m0.257s
00:07:49.990 21:51:09 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:49.990 21:51:09 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x
00:07:49.990 ************************************
00:07:49.990 END TEST accel_crc32c
00:07:49.990 ************************************
00:07:50.250 21:51:09 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:50.250 21:51:09 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2
00:07:50.250 21:51:09 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:07:50.250 21:51:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:50.250 21:51:09 accel -- common/autotest_common.sh@10 -- # set +x
00:07:50.250 ************************************
00:07:50.250 START TEST accel_crc32c_C2
00:07:50.250 ************************************
00:07:50.250 21:51:09 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2
00:07:50.250 21:51:09 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:07:50.250 21:51:09 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:07:50.250 21:51:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:50.250 21:51:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:50.250 21:51:09 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2
00:07:50.250 21:51:09 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:07:50.250 21:51:09 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:07:50.250 21:51:09 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:50.250 21:51:09 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:50.250 21:51:09 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:50.250 21:51:09 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:50.250 21:51:09 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:50.250 21:51:09 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=,
00:07:50.250 21:51:09 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:07:50.250 [2024-07-13 21:51:09.486172] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:07:50.250 [2024-07-13 21:51:09.486253] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1299947 ]
00:07:50.250 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.250 EAL: Requested device 0000:3d:01.0 cannot be used
00:07:50.250 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.250 EAL: Requested device 0000:3d:01.1 cannot be used
00:07:50.250 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.250 EAL: Requested device 0000:3d:01.2 cannot be used
00:07:50.250 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.250 EAL: Requested device 0000:3d:01.3 cannot be used
00:07:50.250 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.250 EAL: Requested device 0000:3d:01.4 cannot be used
00:07:50.250 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.250 EAL: Requested device 0000:3d:01.5 cannot be used
00:07:50.250 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.250 EAL: Requested device 0000:3d:01.6 cannot be used
00:07:50.250 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.251 EAL: Requested device 0000:3d:01.7 cannot be used
00:07:50.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.251 EAL: Requested device 0000:3d:02.0 cannot be used
00:07:50.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.251 EAL: Requested device 0000:3d:02.1 cannot be used
00:07:50.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.251 EAL: Requested device 0000:3d:02.2 cannot be used
00:07:50.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.251 EAL: Requested device 0000:3d:02.3 cannot be used
00:07:50.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.251 EAL: Requested device 0000:3d:02.4 cannot be used
00:07:50.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.251 EAL: Requested device 0000:3d:02.5 cannot be used
00:07:50.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.251 EAL: Requested device 0000:3d:02.6 cannot be used
00:07:50.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.251 EAL: Requested device 0000:3d:02.7 cannot be used
00:07:50.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.251 EAL: Requested device 0000:3f:01.0 cannot be used
00:07:50.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.251 EAL: Requested device 0000:3f:01.1 cannot be used
00:07:50.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.251 EAL: Requested device 0000:3f:01.2 cannot be used
00:07:50.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.251 EAL: Requested device 0000:3f:01.3 cannot be used
00:07:50.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.251 EAL: Requested device 0000:3f:01.4 cannot be used
00:07:50.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.251 EAL: Requested device 0000:3f:01.5 cannot be used
00:07:50.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.251 EAL: Requested device 0000:3f:01.6 cannot be used
00:07:50.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.251 EAL: Requested device 0000:3f:01.7 cannot be used
00:07:50.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.251 EAL: Requested device 0000:3f:02.0 cannot be used
00:07:50.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.251 EAL: Requested device 0000:3f:02.1 cannot be used
00:07:50.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.251 EAL: Requested device 0000:3f:02.2 cannot be used
00:07:50.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.251 EAL: Requested device 0000:3f:02.3 cannot be used
00:07:50.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.251 EAL: Requested device 0000:3f:02.4 cannot be used
00:07:50.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.251 EAL: Requested device 0000:3f:02.5 cannot be used
00:07:50.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.251 EAL: Requested device 0000:3f:02.6 cannot be used
00:07:50.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:50.251 EAL: Requested device 0000:3f:02.7 cannot be used
00:07:50.510 [2024-07-13 21:51:09.644286] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:50.510 [2024-07-13 21:51:09.845343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:50.770 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:07:50.770 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # 
val=Yes 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.771 21:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.681 21:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.681 21:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.681 21:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.681 21:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.681 21:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.681 21:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.681 21:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.681 21:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.681 21:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.681 21:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.681 21:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.681 21:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.681 21:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.681 21:51:11 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in
00:07:52.681 21:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:52.681 21:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:52.681 21:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:07:52.681 21:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:07:52.681 21:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:52.681 21:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:52.681 21:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:07:52.681 21:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:07:52.681 21:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:52.681 21:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:52.681 21:51:12 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:52.681 21:51:12 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:07:52.681 21:51:12 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:52.681
00:07:52.681 real 0m2.580s
00:07:52.681 user 0m2.336s
00:07:52.681 sys 0m0.251s
00:07:52.681 21:51:12 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:52.681 21:51:12 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:07:52.681 ************************************
00:07:52.681 END TEST accel_crc32c_C2
00:07:52.681 ************************************
00:07:52.681 21:51:12 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:52.681 21:51:12 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:07:52.681 21:51:12 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:07:52.681 21:51:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:52.681 21:51:12 accel -- common/autotest_common.sh@10 -- # set +x
00:07:52.940 ************************************
00:07:52.940 START TEST accel_copy
00:07:52.940 ************************************
00:07:52.940 21:51:12 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y
00:07:52.940 21:51:12 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc
00:07:52.940 21:51:12 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module
00:07:52.940 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:52.940 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:52.940 21:51:12 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:07:52.940 21:51:12 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:07:52.940 21:51:12 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
00:07:52.940 21:51:12 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:52.940 21:51:12 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:52.940 21:51:12 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:52.940 21:51:12 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:52.940 21:51:12 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:52.940 21:51:12 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=,
00:07:52.940 21:51:12 accel.accel_copy -- accel/accel.sh@41 -- # jq -r .
00:07:52.940 [2024-07-13 21:51:12.148839] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:07:52.940 [2024-07-13 21:51:12.148924] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1300284 ]
00:07:52.940 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.940 EAL: Requested device 0000:3d:01.0 cannot be used
00:07:52.940 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.940 EAL: Requested device 0000:3d:01.1 cannot be used
00:07:52.940 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.940 EAL: Requested device 0000:3d:01.2 cannot be used
00:07:52.940 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.940 EAL: Requested device 0000:3d:01.3 cannot be used
00:07:52.940 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.940 EAL: Requested device 0000:3d:01.4 cannot be used
00:07:52.940 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.940 EAL: Requested device 0000:3d:01.5 cannot be used
00:07:52.940 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.940 EAL: Requested device 0000:3d:01.6 cannot be used
00:07:52.940 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.940 EAL: Requested device 0000:3d:01.7 cannot be used
00:07:52.940 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.940 EAL: Requested device 0000:3d:02.0 cannot be used
00:07:52.940 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.940 EAL: Requested device 0000:3d:02.1 cannot be used
00:07:52.940 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.940 EAL: Requested device 0000:3d:02.2 cannot be used
00:07:52.940 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.940 EAL: Requested device 0000:3d:02.3 cannot be used
00:07:52.940 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.940 EAL: Requested device 0000:3d:02.4 cannot be used
00:07:52.940 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.940 EAL: Requested device 0000:3d:02.5 cannot be used
00:07:52.940 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.940 EAL: Requested device 0000:3d:02.6 cannot be used
00:07:52.940 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.940 EAL: Requested device 0000:3d:02.7 cannot be used
00:07:52.940 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.940 EAL: Requested device 0000:3f:01.0 cannot be used
00:07:52.940 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.940 EAL: Requested device 0000:3f:01.1 cannot be used
00:07:52.940 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.940 EAL: Requested device 0000:3f:01.2 cannot be used
00:07:52.940 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.940 EAL: Requested device 0000:3f:01.3 cannot be used
00:07:52.940 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.940 EAL: Requested device 0000:3f:01.4 cannot be used
00:07:52.940 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.940 EAL: Requested device 0000:3f:01.5 cannot be used
00:07:52.940 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.940 EAL: Requested device 0000:3f:01.6 cannot be used
00:07:52.940 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.940 EAL: Requested device 0000:3f:01.7 cannot be used
00:07:52.940 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.940 EAL: Requested device 0000:3f:02.0 cannot be used
00:07:52.940 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.940 EAL: Requested device 0000:3f:02.1 cannot be used
00:07:52.940 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.940 EAL: Requested device 0000:3f:02.2 cannot be used
00:07:52.940 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.940 EAL: Requested device 0000:3f:02.3 cannot be used
00:07:52.941 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.941 EAL: Requested device 0000:3f:02.4 cannot be used
00:07:52.941 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.941 EAL: Requested device 0000:3f:02.5 cannot be used
00:07:52.941 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.941 EAL: Requested device 0000:3f:02.6 cannot be used
00:07:52.941 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:52.941 EAL: Requested device 0000:3f:02.7 cannot be used
00:07:52.941 [2024-07-13 21:51:12.308795] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:53.199 [2024-07-13 21:51:12.511461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1
00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:53.459 21:51:12 accel.accel_copy --
accel/accel.sh@20 -- # val= 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.459 21:51:12 accel.accel_copy 
-- accel/accel.sh@20 -- # val=32 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.459 21:51:12 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:53.459 21:51:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.371 21:51:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:55.371 21:51:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.371 21:51:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.371 21:51:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.371 21:51:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:55.371 21:51:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.371 21:51:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.371 21:51:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.371 21:51:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:55.371 21:51:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.371 21:51:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.371 21:51:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.371 21:51:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:55.371 21:51:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.371 21:51:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.371 21:51:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.372 21:51:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:55.372 21:51:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.372 21:51:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.372 21:51:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.372 21:51:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:55.372 21:51:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.372 21:51:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.372 21:51:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.372 21:51:14 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n 
software ]] 00:07:55.372 21:51:14 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:55.372 21:51:14 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:55.372 00:07:55.372 real 0m2.572s 00:07:55.372 user 0m2.340s 00:07:55.372 sys 0m0.234s 00:07:55.372 21:51:14 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.372 21:51:14 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:55.372 ************************************ 00:07:55.372 END TEST accel_copy 00:07:55.372 ************************************ 00:07:55.372 21:51:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:55.372 21:51:14 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:55.372 21:51:14 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:55.372 21:51:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.372 21:51:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:55.372 ************************************ 00:07:55.372 START TEST accel_fill 00:07:55.372 ************************************ 00:07:55.372 21:51:14 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:55.372 21:51:14 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:55.372 21:51:14 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:55.372 21:51:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:55.372 21:51:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:55.372 21:51:14 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:55.372 21:51:14 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:55.372 21:51:14 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:55.372 
21:51:14 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:55.372 21:51:14 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:55.372 21:51:14 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.372 21:51:14 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.372 21:51:14 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:55.372 21:51:14 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:55.372 21:51:14 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:55.631 [2024-07-13 21:51:14.802876] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:55.631 [2024-07-13 21:51:14.802974] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1300811 ] 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:07:55.631 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:07:55.631 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:55.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.631 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:55.631 [2024-07-13 21:51:14.963982] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.926 [2024-07-13 21:51:15.176714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.185 
21:51:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:56.185 21:51:15 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.185 
21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:56.185 21:51:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:58.103 21:51:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:58.103 21:51:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:58.103 21:51:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:58.103 21:51:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:58.103 21:51:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:58.103 21:51:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:58.103 21:51:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:58.103 21:51:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:58.103 21:51:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:58.103 21:51:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:58.103 21:51:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:58.103 21:51:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:58.103 21:51:17 accel.accel_fill -- 
accel/accel.sh@20 -- # val= 00:07:58.103 21:51:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:58.103 21:51:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:58.103 21:51:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:58.103 21:51:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:58.103 21:51:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:58.103 21:51:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:58.103 21:51:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:58.103 21:51:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:58.103 21:51:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:58.103 21:51:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:58.103 21:51:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:58.103 21:51:17 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:58.103 21:51:17 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:58.103 21:51:17 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:58.103 00:07:58.103 real 0m2.619s 00:07:58.103 user 0m2.361s 00:07:58.103 sys 0m0.257s 00:07:58.103 21:51:17 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.103 21:51:17 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:58.103 ************************************ 00:07:58.103 END TEST accel_fill 00:07:58.103 ************************************ 00:07:58.103 21:51:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:58.103 21:51:17 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:58.103 21:51:17 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:58.103 21:51:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.103 21:51:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:58.103 
************************************ 00:07:58.103 START TEST accel_copy_crc32c 00:07:58.103 ************************************ 00:07:58.103 21:51:17 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:58.103 21:51:17 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:58.103 21:51:17 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:58.103 21:51:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:58.103 21:51:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:58.103 21:51:17 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:58.103 21:51:17 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:58.103 21:51:17 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:58.103 21:51:17 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:58.103 21:51:17 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:58.103 21:51:17 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.103 21:51:17 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.103 21:51:17 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:58.103 21:51:17 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:58.103 21:51:17 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:58.362 [2024-07-13 21:51:17.507429] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:58.362 [2024-07-13 21:51:17.507507] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301345 ] 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:58.362 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:58.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:58.362 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:58.362 [2024-07-13 21:51:17.662216] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.621 [2024-07-13 21:51:17.864335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:58.880 
21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 
00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:58.880 21:51:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.787 21:51:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:00.787 21:51:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.787 21:51:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.787 21:51:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.787 21:51:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:00.787 21:51:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.787 21:51:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.787 21:51:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.787 21:51:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:00.787 
21:51:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.787 21:51:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.787 21:51:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.787 21:51:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:00.787 21:51:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.787 21:51:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.787 21:51:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.787 21:51:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:00.787 21:51:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.787 21:51:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.787 21:51:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.787 21:51:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:00.787 21:51:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.787 21:51:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.787 21:51:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.787 21:51:20 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:00.787 21:51:20 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:00.787 21:51:20 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:00.787 00:08:00.787 real 0m2.592s 00:08:00.787 user 0m2.347s 00:08:00.787 sys 0m0.251s 00:08:00.787 21:51:20 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.787 21:51:20 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:00.787 ************************************ 00:08:00.787 END TEST accel_copy_crc32c 00:08:00.787 ************************************ 00:08:00.787 21:51:20 accel -- 
common/autotest_common.sh@1142 -- # return 0 00:08:00.787 21:51:20 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:00.787 21:51:20 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:00.787 21:51:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.787 21:51:20 accel -- common/autotest_common.sh@10 -- # set +x 00:08:00.787 ************************************ 00:08:00.787 START TEST accel_copy_crc32c_C2 00:08:00.787 ************************************ 00:08:00.787 21:51:20 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:00.787 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:00.787 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:00.787 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:00.787 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:00.787 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:00.787 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:00.787 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:00.787 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:00.787 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:00.787 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.787 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.787 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:00.787 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:00.787 21:51:20 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:01.046 [2024-07-13 21:51:20.177460] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:01.046 [2024-07-13 21:51:20.177553] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301808 ] 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested 
device 0000:3d:02.2 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3f:02.0 
cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:01.047 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.047 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:01.047 [2024-07-13 21:51:20.340531] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.306 [2024-07-13 21:51:20.542849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:01.564 
21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.564 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # 
val=1 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.565 21:51:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:08:03.471 00:08:03.471 real 0m2.595s 00:08:03.471 user 0m2.333s 00:08:03.471 sys 0m0.270s 00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.471 21:51:22 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:03.471 ************************************ 00:08:03.471 END TEST accel_copy_crc32c_C2 00:08:03.471 ************************************ 00:08:03.471 21:51:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:03.471 21:51:22 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:03.471 21:51:22 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:03.471 21:51:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.471 21:51:22 accel -- common/autotest_common.sh@10 -- # set +x 00:08:03.471 ************************************ 00:08:03.471 START TEST accel_dualcast 00:08:03.471 ************************************ 00:08:03.471 21:51:22 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:08:03.471 21:51:22 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:08:03.471 21:51:22 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:08:03.471 21:51:22 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:03.471 21:51:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:03.471 21:51:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:03.471 21:51:22 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:03.471 21:51:22 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:08:03.471 21:51:22 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:03.471 21:51:22 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 
00:08:03.471 21:51:22 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.471 21:51:22 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.471 21:51:22 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:03.471 21:51:22 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:08:03.471 21:51:22 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:08:03.471 [2024-07-13 21:51:22.844385] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:03.471 [2024-07-13 21:51:22.844467] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1302184 ] 00:08:03.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.730 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:03.731 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:08:03.731 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:03.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:03.731 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:03.731 [2024-07-13 21:51:23.005634] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.000 [2024-07-13 21:51:23.208948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.261 21:51:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:04.261 21:51:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:04.261 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:04.261 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:04.261 21:51:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:04.261 21:51:23 accel.accel_dualcast -- 
accel/accel.sh@21 -- # case "$var" in 00:08:04.261 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:04.261 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:04.261 21:51:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:08:04.261 21:51:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:04.261 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:04.261 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:04.261 21:51:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:04.261 21:51:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:04.262 21:51:23 
accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@20 -- # 
val=Yes 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:04.262 21:51:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:06.165 21:51:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:06.165 21:51:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:06.165 21:51:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:06.165 21:51:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:06.165 21:51:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:06.165 21:51:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:06.165 21:51:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:06.165 21:51:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:06.165 21:51:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:06.165 21:51:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:06.165 21:51:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:06.165 21:51:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:06.165 21:51:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:06.165 21:51:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 
00:08:06.165 21:51:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:06.165 21:51:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:06.165 21:51:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:06.165 21:51:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:06.165 21:51:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:06.165 21:51:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:06.165 21:51:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:06.165 21:51:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:06.165 21:51:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:06.165 21:51:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:06.165 21:51:25 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:06.165 21:51:25 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:08:06.165 21:51:25 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:06.165 00:08:06.165 real 0m2.585s 00:08:06.165 user 0m2.353s 00:08:06.165 sys 0m0.235s 00:08:06.165 21:51:25 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.165 21:51:25 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:08:06.165 ************************************ 00:08:06.165 END TEST accel_dualcast 00:08:06.165 ************************************ 00:08:06.165 21:51:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:06.165 21:51:25 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:06.165 21:51:25 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:06.165 21:51:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.165 21:51:25 accel -- common/autotest_common.sh@10 -- # set +x 00:08:06.165 ************************************ 00:08:06.165 START TEST accel_compare 
00:08:06.165 ************************************ 00:08:06.165 21:51:25 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:08:06.165 21:51:25 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:08:06.165 21:51:25 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:08:06.165 21:51:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:06.165 21:51:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:06.165 21:51:25 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:06.165 21:51:25 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:06.165 21:51:25 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:08:06.165 21:51:25 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:06.165 21:51:25 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:06.165 21:51:25 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.165 21:51:25 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.165 21:51:25 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:06.165 21:51:25 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:08:06.165 21:51:25 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:08:06.165 [2024-07-13 21:51:25.527201] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:06.165 [2024-07-13 21:51:25.527281] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1302729 ] 00:08:06.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.424 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:06.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.424 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:06.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.424 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:06.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.424 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:06.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.424 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:06.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.424 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:06.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.424 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:06.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.424 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:06.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.424 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:06.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.424 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:06.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.424 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:06.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.424 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:06.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.424 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:06.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.424 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:06.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.424 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:06.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.424 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:06.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.424 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:06.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.424 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:06.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.424 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:06.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.424 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:06.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.424 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:06.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.424 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:06.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.424 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:06.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.424 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:06.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.424 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:06.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.424 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:06.424 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.424 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:06.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.425 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:06.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.425 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:06.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.425 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:06.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.425 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:06.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.425 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:06.425 [2024-07-13 21:51:25.684101] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.683 [2024-07-13 21:51:25.897831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 
00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # 
IFS=: 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 
00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:06.942 21:51:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:08.843 21:51:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:08.843 21:51:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:08.843 21:51:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:08.843 21:51:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:08.843 21:51:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:08.843 21:51:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:08.843 21:51:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:08.843 21:51:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:08.843 21:51:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:08.843 21:51:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:08.843 21:51:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:08.843 21:51:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:08.843 21:51:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:08.843 21:51:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:08.843 21:51:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:08.843 21:51:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:08.843 21:51:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:08.843 21:51:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:08.843 21:51:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:08.843 21:51:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:08.843 21:51:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:08.843 21:51:28 accel.accel_compare 
-- accel/accel.sh@21 -- # case "$var" in 00:08:08.843 21:51:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:08.843 21:51:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:08.843 21:51:28 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:08.843 21:51:28 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:08.843 21:51:28 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:08.843 00:08:08.843 real 0m2.622s 00:08:08.843 user 0m2.397s 00:08:08.843 sys 0m0.230s 00:08:08.843 21:51:28 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.843 21:51:28 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:08:08.843 ************************************ 00:08:08.843 END TEST accel_compare 00:08:08.843 ************************************ 00:08:08.843 21:51:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:08.843 21:51:28 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:08.843 21:51:28 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:08.843 21:51:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.843 21:51:28 accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.843 ************************************ 00:08:08.843 START TEST accel_xor 00:08:08.843 ************************************ 00:08:08.843 21:51:28 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:08:08.843 21:51:28 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:08.843 21:51:28 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:08.843 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:08.843 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:08.843 21:51:28 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:08.843 21:51:28 accel.accel_xor -- accel/accel.sh@12 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:08.843 21:51:28 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:08.843 21:51:28 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.843 21:51:28 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.843 21:51:28 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.843 21:51:28 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.843 21:51:28 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.843 21:51:28 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:08.843 21:51:28 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:08.843 [2024-07-13 21:51:28.231488] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:08.843 [2024-07-13 21:51:28.231577] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1303272 ] 00:08:09.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.103 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:09.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.103 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:09.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.103 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:09.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.103 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:09.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.103 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:09.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.103 EAL: Requested device 0000:3d:01.5 cannot be used 
00:08:09.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.103 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:09.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.103 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:09.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.104 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:09.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.104 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:09.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.104 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:09.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.104 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:09.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.104 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:09.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.104 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:09.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.104 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:09.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.104 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:09.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.104 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:09.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.104 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:09.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.104 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:09.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.104 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:09.104 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.104 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:09.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.104 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:09.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.104 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:09.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.104 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:09.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.104 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:09.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.104 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:09.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.104 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:09.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.104 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:09.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.104 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:09.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.104 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:09.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.104 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:09.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.104 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:09.104 [2024-07-13 21:51:28.389088] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.364 [2024-07-13 21:51:28.600326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:09.624 21:51:28 accel.accel_xor -- 
accel/accel.sh@21 -- # case "$var" in 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:09.624 
21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:08:09.624 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:09.625 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:09.625 21:51:28 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:09.625 21:51:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:09.625 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:09.625 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:09.625 21:51:28 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:09.625 21:51:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:09.625 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:09.625 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:09.625 21:51:28 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:09.625 21:51:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:09.625 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:09.625 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:09.625 21:51:28 
accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:09.625 21:51:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:09.625 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:09.625 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:09.625 21:51:28 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:09.625 21:51:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:09.625 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:09.625 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:09.625 21:51:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:09.625 21:51:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:09.625 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:09.625 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:09.625 21:51:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:09.625 21:51:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:09.625 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:09.625 21:51:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:11.543 00:08:11.543 real 0m2.646s 00:08:11.543 user 0m2.397s 00:08:11.543 sys 0m0.247s 00:08:11.543 21:51:30 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.543 21:51:30 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:11.543 ************************************ 00:08:11.543 END TEST accel_xor 00:08:11.543 ************************************ 00:08:11.543 21:51:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:11.543 21:51:30 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:11.543 21:51:30 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:11.543 21:51:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.543 21:51:30 
accel -- common/autotest_common.sh@10 -- # set +x 00:08:11.543 ************************************ 00:08:11.543 START TEST accel_xor 00:08:11.543 ************************************ 00:08:11.543 21:51:30 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.543 21:51:30 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:11.544 21:51:30 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:11.803 [2024-07-13 21:51:30.937613] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:11.803 [2024-07-13 21:51:30.937692] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1303678 ] 00:08:11.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.803 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:11.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.803 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:11.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.803 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:11.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.803 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:11.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.803 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:11.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.803 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:11.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.803 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:11.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.803 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:11.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.803 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:11.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.803 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:11.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.803 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:11.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.803 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:11.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.803 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:11.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.803 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:11.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.803 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:11.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.803 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:11.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.803 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:11.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.803 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:11.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.803 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:11.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.803 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:11.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.803 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:11.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.803 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:11.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.803 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:11.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.803 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:11.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.803 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:11.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.803 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:11.803 
qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.803 EAL: Requested device 0000:3f:02.2 cannot be used
00:08:11.804 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.804 EAL: Requested device 0000:3f:02.3 cannot be used
00:08:11.804 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.804 EAL: Requested device 0000:3f:02.4 cannot be used
00:08:11.804 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.804 EAL: Requested device 0000:3f:02.5 cannot be used
00:08:11.804 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.804 EAL: Requested device 0000:3f:02.6 cannot be used
00:08:11.804 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.804 EAL: Requested device 0000:3f:02.7 cannot be used
00:08:11.804 [2024-07-13 21:51:31.097855] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:12.063 [2024-07-13 21:51:31.298000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@20 -- # val=3
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:08:12.322 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:12.323 21:51:31 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:08:12.323 21:51:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:08:12.323 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:08:12.323 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:12.323 21:51:31 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:08:12.323 21:51:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:08:12.323 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:08:12.323 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:12.323 21:51:31 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:08:12.323 21:51:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:08:12.323 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:08:12.323 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:12.323 21:51:31 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:08:12.323 21:51:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:08:12.323 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:08:12.323 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:12.323 21:51:31 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:08:12.323 21:51:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:08:12.323 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:08:12.323 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:12.323 21:51:31 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:08:12.323 21:51:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:08:12.323 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:08:12.323 21:51:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:14.228 21:51:33 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:08:14.228 21:51:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:08:14.228 21:51:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:08:14.228 21:51:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:14.228 21:51:33 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:08:14.228 21:51:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:08:14.228 21:51:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:08:14.228 21:51:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:14.228 21:51:33 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:08:14.228 21:51:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:08:14.228 21:51:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:08:14.228 21:51:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:14.228 21:51:33 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:08:14.228 21:51:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:08:14.228 21:51:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:08:14.228 21:51:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:14.228 21:51:33 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:08:14.228 21:51:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:08:14.228 21:51:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:08:14.228 21:51:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:14.228 21:51:33 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:08:14.228 21:51:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:08:14.228 21:51:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:08:14.228 21:51:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:14.228 21:51:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:14.228 21:51:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:08:14.228 21:51:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:14.228
00:08:14.228 real 0m2.591s
00:08:14.228 user 0m2.353s
00:08:14.229 sys 0m0.244s
00:08:14.229 21:51:33 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:14.229 21:51:33 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:08:14.229 ************************************ END TEST accel_xor ************************************
00:08:14.229 21:51:33 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:14.229 21:51:33 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:08:14.229 21:51:33 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:08:14.229 21:51:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:14.229 21:51:33 accel -- common/autotest_common.sh@10 -- # set +x
00:08:14.229 ************************************ START TEST accel_dif_verify ************************************
00:08:14.229 21:51:33 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify
00:08:14.229 21:51:33 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc
00:08:14.229 21:51:33 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module
00:08:14.229 21:51:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:14.229 21:51:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:14.229 21:51:33 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:08:14.229 21:51:33 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:08:14.229 21:51:33 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config
00:08:14.229 21:51:33 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:14.229 21:51:33 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:14.229 21:51:33 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:14.229 21:51:33 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:14.229 21:51:33 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:14.229 21:51:33 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=,
00:08:14.229 21:51:33 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r .
00:08:14.487 [2024-07-13 21:51:33.625145] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:08:14.487 [2024-07-13 21:51:33.625226] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1304113 ]
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3d:01.0 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3d:01.1 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3d:01.2 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3d:01.3 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3d:01.4 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3d:01.5 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3d:01.6 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3d:01.7 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3d:02.0 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3d:02.1 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3d:02.2 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3d:02.3 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3d:02.4 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3d:02.5 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3d:02.6 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3d:02.7 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3f:01.0 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3f:01.1 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3f:01.2 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3f:01.3 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3f:01.4 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3f:01.5 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3f:01.6 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3f:01.7 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3f:02.0 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3f:02.1 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3f:02.2 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3f:02.3 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3f:02.4 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3f:02.5 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3f:02.6 cannot be used
00:08:14.487 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:14.487 EAL: Requested device 0000:3f:02.7 cannot be used
00:08:14.487 [2024-07-13 21:51:33.784485] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:14.745 [2024-07-13 21:51:33.992175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:15.004 21:51:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:08:15.004 21:51:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:15.004 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:15.004 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:15.004 21:51:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:08:15.004 21:51:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:15.004 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:15.004 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:15.004 21:51:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1
00:08:15.004 21:51:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:15.004 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:15.004 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:15.004 21:51:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:08:15.004 21:51:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:15.004 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:15.004 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:15.004 21:51:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:08:15.004 21:51:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:15.004 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:15.004 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:15.004 21:51:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify
00:08:15.004 21:51:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:15.004 21:51:34 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify
00:08:15.004 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:15.004 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:15.004 21:51:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes'
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes'
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds'
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:15.005 21:51:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:16.908 21:51:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:08:16.908 21:51:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:16.908 21:51:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:16.908 21:51:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:16.908 21:51:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:08:16.908 21:51:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:16.908 21:51:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:16.908 21:51:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:16.908 21:51:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:08:16.908 21:51:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:16.908 21:51:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:16.908 21:51:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:16.908 21:51:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:08:16.908 21:51:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:16.908 21:51:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:16.908 21:51:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:16.908 21:51:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:08:16.908 21:51:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:16.908 21:51:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:16.908 21:51:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:16.908 21:51:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:08:16.908 21:51:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:16.908 21:51:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:16.908 21:51:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:16.908 21:51:36 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:16.908 21:51:36 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:08:16.908 21:51:36 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:16.908
00:08:16.908 real 0m2.613s
00:08:16.908 user 0m2.376s
00:08:16.908 sys 0m0.244s
00:08:16.908 21:51:36 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:16.908 21:51:36 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:08:16.908 ************************************ END TEST accel_dif_verify ************************************
00:08:16.909 21:51:36 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:16.909 21:51:36 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:08:16.909 21:51:36 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:08:16.909 21:51:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:16.909 21:51:36 accel -- common/autotest_common.sh@10 -- # set +x
00:08:16.909 ************************************ START TEST accel_dif_generate ************************************
00:08:16.909 21:51:36 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate
00:08:16.909 21:51:36 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc
00:08:16.909 21:51:36 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module
00:08:16.909 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:16.909 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:08:16.909 21:51:36 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:08:16.909 21:51:36 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:08:16.909 21:51:36 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config
00:08:16.909 21:51:36 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:16.909 21:51:36 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:16.909 21:51:36 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:16.909 21:51:36 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:16.909 21:51:36 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:16.909 21:51:36 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=,
00:08:16.909 21:51:36 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r .
00:08:17.168 [2024-07-13 21:51:36.323244] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:08:17.168 [2024-07-13 21:51:36.323324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1304659 ]
00:08:17.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.168 EAL: Requested device 0000:3d:01.0 cannot be used
00:08:17.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.168 EAL: Requested device 0000:3d:01.1 cannot be used
00:08:17.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.168 EAL: Requested device 0000:3d:01.2 cannot be used
00:08:17.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.168 EAL: Requested device 0000:3d:01.3 cannot be used
00:08:17.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.168 EAL: Requested device 0000:3d:01.4 cannot be used
00:08:17.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.168 EAL: Requested device 0000:3d:01.5 cannot be used
00:08:17.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.168 EAL: Requested device 0000:3d:01.6 cannot be used
00:08:17.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.168 EAL: Requested device 0000:3d:01.7 cannot be used
00:08:17.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.168 EAL: Requested device 0000:3d:02.0 cannot be used
00:08:17.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.168 EAL: Requested device 0000:3d:02.1 cannot be used
00:08:17.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.168 EAL: Requested device 0000:3d:02.2 cannot be used
00:08:17.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.168 EAL: Requested device 0000:3d:02.3 cannot be used
00:08:17.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.168 EAL: Requested device 0000:3d:02.4 cannot be used
00:08:17.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.168 EAL: Requested device 0000:3d:02.5 cannot be used
00:08:17.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.168 EAL: Requested device 0000:3d:02.6 cannot be used
00:08:17.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.169 EAL: Requested device 0000:3d:02.7 cannot be used
00:08:17.169 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.169 EAL: Requested device 0000:3f:01.0 cannot be used
00:08:17.169 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.169 EAL: Requested device 0000:3f:01.1 cannot be used
00:08:17.169 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.169 EAL: Requested device 0000:3f:01.2 cannot be used
00:08:17.169 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.169 EAL: Requested device 0000:3f:01.3 cannot be used
00:08:17.169 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.169 EAL: Requested device 0000:3f:01.4 cannot be used
00:08:17.169 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.169 EAL: Requested device 0000:3f:01.5 cannot be used
00:08:17.169 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.169 EAL: Requested device 0000:3f:01.6 cannot be used
00:08:17.169 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.169 EAL: Requested device 0000:3f:01.7 cannot be used
00:08:17.169 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.169 EAL: Requested device 0000:3f:02.0 cannot be used
00:08:17.169 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.169 EAL: Requested device 0000:3f:02.1 cannot be used
00:08:17.169 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.169 EAL: Requested device 0000:3f:02.2 cannot be used
00:08:17.169 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.169 EAL: Requested device 0000:3f:02.3 cannot be used
00:08:17.169 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.169 EAL: Requested device 0000:3f:02.4 cannot be used
00:08:17.169 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.169 EAL: Requested device 0000:3f:02.5 cannot be used
00:08:17.169 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.169 EAL: Requested device 0000:3f:02.6 cannot be used
00:08:17.169 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.169 EAL: Requested device 0000:3f:02.7 cannot be used
00:08:17.169 [2024-07-13 21:51:36.482617] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:17.428 [2024-07-13 21:51:36.678491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes'
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes'
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:08:17.688 21:51:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:08:17.689 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:17.689 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:08:17.689 21:51:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:08:17.689 21:51:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:08:17.689 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:17.689 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19
-- # read -r var val 00:08:17.689 21:51:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:17.689 21:51:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:17.689 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:17.689 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:17.689 21:51:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:08:17.689 21:51:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:17.689 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:17.689 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:17.689 21:51:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:17.689 21:51:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:17.689 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:17.689 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:17.689 21:51:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:17.689 21:51:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:17.689 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:17.689 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:17.689 21:51:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:17.689 21:51:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:17.689 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:17.689 21:51:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:19.595 21:51:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:19.595 21:51:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:19.595 21:51:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:19.595 21:51:38 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:08:19.595 21:51:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:19.595 21:51:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:19.595 21:51:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:19.595 21:51:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:19.595 21:51:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:19.595 21:51:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:19.595 21:51:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:19.595 21:51:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:19.595 21:51:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:19.595 21:51:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:19.595 21:51:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:19.595 21:51:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:19.595 21:51:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:19.595 21:51:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:19.595 21:51:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:19.595 21:51:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:19.595 21:51:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:19.595 21:51:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:19.595 21:51:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:19.595 21:51:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:19.595 21:51:38 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:19.595 21:51:38 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:19.595 21:51:38 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e 
]] 00:08:19.595 00:08:19.595 real 0m2.574s 00:08:19.595 user 0m2.342s 00:08:19.595 sys 0m0.240s 00:08:19.595 21:51:38 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.595 21:51:38 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:19.595 ************************************ 00:08:19.595 END TEST accel_dif_generate 00:08:19.595 ************************************ 00:08:19.595 21:51:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:19.595 21:51:38 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:19.595 21:51:38 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:19.595 21:51:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.595 21:51:38 accel -- common/autotest_common.sh@10 -- # set +x 00:08:19.595 ************************************ 00:08:19.595 START TEST accel_dif_generate_copy 00:08:19.595 ************************************ 00:08:19.595 21:51:38 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:08:19.595 21:51:38 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:19.595 21:51:38 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:19.595 21:51:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:19.595 21:51:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:19.595 21:51:38 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:19.595 21:51:38 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:19.595 21:51:38 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:19.595 21:51:38 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # 
accel_json_cfg=() 00:08:19.595 21:51:38 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:19.595 21:51:38 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:19.595 21:51:38 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:19.595 21:51:38 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:19.595 21:51:38 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:19.595 21:51:38 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:19.595 [2024-07-13 21:51:38.980676] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:19.595 [2024-07-13 21:51:38.980758] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1305183 ] 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:19.854 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:19.854 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:19.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.854 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:19.854 [2024-07-13 21:51:39.136918] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.113 [2024-07-13 21:51:39.340298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.370 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- 
# IFS=: 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 
00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.371 21:51:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.371 21:51:39 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 
-- # case "$var" in 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:22.274 00:08:22.274 real 0m2.584s 00:08:22.274 user 0m2.341s 00:08:22.274 sys 0m0.243s 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.274 21:51:41 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:22.274 ************************************ 00:08:22.274 END TEST accel_dif_generate_copy 00:08:22.274 ************************************ 00:08:22.274 21:51:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:22.274 21:51:41 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:22.274 21:51:41 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:22.274 21:51:41 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:22.274 21:51:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.274 21:51:41 accel -- common/autotest_common.sh@10 -- # set +x 00:08:22.274 ************************************ 00:08:22.274 START TEST accel_comp 00:08:22.274 ************************************ 00:08:22.274 21:51:41 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:22.274 21:51:41 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:22.274 21:51:41 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:22.274 21:51:41 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:22.274 21:51:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:22.274 21:51:41 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:22.274 21:51:41 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:22.274 21:51:41 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:22.274 21:51:41 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:22.274 21:51:41 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:22.274 21:51:41 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:22.274 21:51:41 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:22.274 21:51:41 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:22.274 21:51:41 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:22.274 21:51:41 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:22.274 [2024-07-13 21:51:41.648817] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:22.274 [2024-07-13 21:51:41.648895] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1305519 ] 00:08:22.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.534 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:22.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.534 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:22.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.534 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:22.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.534 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:22.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.534 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:22.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.534 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:22.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.534 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:22.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.534 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:22.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.534 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:22.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.535 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:22.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.535 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:22.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.535 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:22.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.535 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:22.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.535 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:22.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.535 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:22.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.535 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:22.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.535 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:22.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.535 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:22.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.535 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:22.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.535 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:22.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.535 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:22.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.535 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:22.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.535 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:22.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.535 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:22.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.535 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:22.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.535 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:22.535 
qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.535 EAL: Requested device 0000:3f:02.2 cannot be used
00:08:22.535 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.535 EAL: Requested device 0000:3f:02.3 cannot be used
00:08:22.535 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.535 EAL: Requested device 0000:3f:02.4 cannot be used
00:08:22.535 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.535 EAL: Requested device 0000:3f:02.5 cannot be used
00:08:22.535 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.535 EAL: Requested device 0000:3f:02.6 cannot be used
00:08:22.535 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.535 EAL: Requested device 0000:3f:02.7 cannot be used
00:08:22.535 [2024-07-13 21:51:41.809886] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:22.794 [2024-07-13 21:51:42.014565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@20 -- # val=compress
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@20 -- # val=software
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@20 -- # val=1
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds'
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@20 -- # val=No
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:08:23.054 21:51:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:08:24.961 21:51:44 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:08:24.961 21:51:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:08:24.961 21:51:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:08:24.961 21:51:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:08:24.961 21:51:44 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:08:24.961 21:51:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:08:24.961 21:51:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:08:24.961 21:51:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:08:24.961 21:51:44 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:08:24.961 21:51:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:08:24.961 21:51:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:08:24.961 21:51:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:08:24.961 21:51:44 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:08:24.961 21:51:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:08:24.961 21:51:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:08:24.961 21:51:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:08:24.961 21:51:44 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:08:24.961 21:51:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:08:24.961 21:51:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:08:24.961 21:51:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:08:24.961 21:51:44 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:08:24.961 21:51:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:08:24.961 21:51:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:08:24.961 21:51:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:08:24.961 21:51:44 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:24.961 21:51:44 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]]
00:08:24.961 21:51:44 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:24.961 
00:08:24.961 real	0m2.600s
00:08:24.961 user	0m2.356s
00:08:24.961 sys	0m0.253s
00:08:24.961 21:51:44 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:24.961 21:51:44 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x
00:08:24.961 ************************************
00:08:24.961 END TEST accel_comp
00:08:24.961 ************************************
00:08:24.961 21:51:44 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:24.961 21:51:44 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:08:24.961 21:51:44 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:08:24.961 21:51:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:24.961 21:51:44 accel -- common/autotest_common.sh@10 -- # set +x
00:08:24.961 ************************************
00:08:24.961 START TEST accel_decomp
00:08:24.961 ************************************
00:08:24.961 21:51:44 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:08:24.961 21:51:44 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc
00:08:24.961 21:51:44 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module
00:08:24.961 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:08:24.961 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:08:24.961 21:51:44 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:08:24.961 21:51:44 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:08:24.961 21:51:44 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config
00:08:24.961 21:51:44 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:24.961 21:51:44 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:24.961 21:51:44 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:24.962 21:51:44 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:24.962 21:51:44 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:24.962 21:51:44 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=,
00:08:24.962 21:51:44 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r .
00:08:24.962 [2024-07-13 21:51:44.332490] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:08:24.962 [2024-07-13 21:51:44.332567] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1306041 ]
00:08:25.221 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.221 EAL: Requested device 0000:3d:01.0 cannot be used
00:08:25.221 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.221 EAL: Requested device 0000:3d:01.1 cannot be used
00:08:25.221 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.221 EAL: Requested device 0000:3d:01.2 cannot be used
00:08:25.221 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.221 EAL: Requested device 0000:3d:01.3 cannot be used
00:08:25.221 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.221 EAL: Requested device 0000:3d:01.4 cannot be used
00:08:25.221 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.221 EAL: Requested device 0000:3d:01.5 cannot be used
00:08:25.221 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.221 EAL: Requested device 0000:3d:01.6 cannot be used
00:08:25.221 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.221 EAL: Requested device 0000:3d:01.7 cannot be used
00:08:25.221 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.221 EAL: Requested device 0000:3d:02.0 cannot be used
00:08:25.221 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.221 EAL: Requested device 0000:3d:02.1 cannot be used
00:08:25.221 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.221 EAL: Requested device 0000:3d:02.2 cannot be used
00:08:25.221 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.221 EAL: Requested device 0000:3d:02.3 cannot be used
00:08:25.221 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.221 EAL: Requested device 0000:3d:02.4 cannot be used
00:08:25.221 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.221 EAL: Requested device 0000:3d:02.5 cannot be used
00:08:25.221 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.221 EAL: Requested device 0000:3d:02.6 cannot be used
00:08:25.221 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.221 EAL: Requested device 0000:3d:02.7 cannot be used
00:08:25.221 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.221 EAL: Requested device 0000:3f:01.0 cannot be used
00:08:25.221 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.221 EAL: Requested device 0000:3f:01.1 cannot be used
00:08:25.221 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.221 EAL: Requested device 0000:3f:01.2 cannot be used
00:08:25.221 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.222 EAL: Requested device 0000:3f:01.3 cannot be used
00:08:25.222 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.222 EAL: Requested device 0000:3f:01.4 cannot be used
00:08:25.222 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.222 EAL: Requested device 0000:3f:01.5 cannot be used
00:08:25.222 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.222 EAL: Requested device 0000:3f:01.6 cannot be used
00:08:25.222 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.222 EAL: Requested device 0000:3f:01.7 cannot be used
00:08:25.222 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.222 EAL: Requested device 0000:3f:02.0 cannot be used
00:08:25.222 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.222 EAL: Requested device 0000:3f:02.1 cannot be used
00:08:25.222 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.222 EAL: Requested device 0000:3f:02.2 cannot be used
00:08:25.222 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.222 EAL: Requested device 0000:3f:02.3 cannot be used
00:08:25.222 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.222 EAL: Requested device 0000:3f:02.4 cannot be used
00:08:25.222 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.222 EAL: Requested device 0000:3f:02.5 cannot be used
00:08:25.222 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.222 EAL: Requested device 0000:3f:02.6 cannot be used
00:08:25.222 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.222 EAL: Requested device 0000:3f:02.7 cannot be used
00:08:25.222 [2024-07-13 21:51:44.489160] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:25.481 [2024-07-13 21:51:44.694723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:25.742 21:51:44 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:08:25.742 21:51:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:08:25.742 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:08:25.742 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:08:25.742 21:51:44 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:08:25.742 21:51:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:08:25.742 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:08:25.742 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:08:25.742 21:51:44 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:08:25.742 21:51:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:08:25.742 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:08:25.742 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:08:25.742 21:51:44 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1
00:08:25.742 21:51:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:08:25.742 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:08:25.742 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:08:25.742 21:51:44 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:08:25.742 21:51:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:08:25.742 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:08:25.742 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:08:25.742 21:51:44 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:08:25.742 21:51:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:08:25.742 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:08:25.742 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:08:25.742 21:51:44 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress
00:08:25.742 21:51:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@20 -- # val=software
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@20 -- # val=32
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@20 -- # val=32
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@20 -- # val=1
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds'
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:08:25.743 21:51:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:08:27.648 21:51:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:08:27.648 21:51:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:08:27.648 21:51:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:08:27.648 21:51:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:08:27.648 21:51:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:08:27.648 21:51:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:08:27.648 21:51:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:08:27.648 21:51:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:08:27.648 21:51:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:08:27.648 21:51:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:08:27.648 21:51:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:08:27.648 21:51:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:08:27.648 21:51:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:08:27.648 21:51:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:08:27.648 21:51:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:08:27.648 21:51:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:08:27.648 21:51:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:08:27.648 21:51:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:08:27.648 21:51:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:08:27.648 21:51:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:08:27.648 21:51:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:08:27.648 21:51:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:08:27.648 21:51:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:08:27.648 21:51:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:08:27.648 21:51:46 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:27.648 21:51:46 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:08:27.648 21:51:46 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:27.648 
00:08:27.648 real	0m2.611s
00:08:27.648 user	0m2.380s
00:08:27.648 sys	0m0.239s
00:08:27.648 21:51:46 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:27.648 21:51:46 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x
00:08:27.648 ************************************
00:08:27.648 END TEST accel_decomp
00:08:27.648 ************************************
00:08:27.648 21:51:46 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:27.648 21:51:46 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0
00:08:27.648 21:51:46 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:08:27.648 21:51:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:27.648 21:51:46 accel -- common/autotest_common.sh@10 -- # set +x
00:08:27.648 ************************************
00:08:27.648 START TEST accel_decomp_full
00:08:27.648 ************************************
00:08:27.648 21:51:46 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0
00:08:27.648 21:51:46 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc
00:08:27.648 21:51:46 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module
00:08:27.648 21:51:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:08:27.648 21:51:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:08:27.648 21:51:46 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0
00:08:27.648 21:51:46 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0
00:08:27.648 21:51:46 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config
00:08:27.648 21:51:46 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:27.648 21:51:46 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:27.648 21:51:46 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:27.648 21:51:46 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:27.648 21:51:46 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:27.648 21:51:46 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=,
00:08:27.648 21:51:46 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r .
00:08:27.649 [2024-07-13 21:51:47.023461] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:08:27.649 [2024-07-13 21:51:47.023540] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1306580 ]
00:08:27.908 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.908 EAL: Requested device 0000:3d:01.0 cannot be used
00:08:27.908 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.908 EAL: Requested device 0000:3d:01.1 cannot be used
00:08:27.908 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.908 EAL: Requested device 0000:3d:01.2 cannot be used
00:08:27.908 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.908 EAL: Requested device 0000:3d:01.3 cannot be used
00:08:27.908 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.908 EAL: Requested device 0000:3d:01.4 cannot be used
00:08:27.908 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.908 EAL: Requested device 0000:3d:01.5 cannot be used
00:08:27.908 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.908 EAL: Requested device 0000:3d:01.6 cannot be used
00:08:27.908 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.908 EAL: Requested device 0000:3d:01.7 cannot be used
00:08:27.908 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.909 EAL: Requested device 0000:3d:02.0 cannot be used
00:08:27.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.909 EAL: Requested device 0000:3d:02.1 cannot be used
00:08:27.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.909 EAL: Requested device 0000:3d:02.2 cannot be used
00:08:27.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.909 EAL: Requested device 0000:3d:02.3 cannot be used
00:08:27.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.909 EAL: Requested device 0000:3d:02.4 cannot be used
00:08:27.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.909 EAL: Requested device 0000:3d:02.5 cannot be used
00:08:27.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.909 EAL: Requested device 0000:3d:02.6 cannot be used
00:08:27.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.909 EAL: Requested device 0000:3d:02.7 cannot be used
00:08:27.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.909 EAL: Requested device 0000:3f:01.0 cannot be used
00:08:27.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.909 EAL: Requested device 0000:3f:01.1 cannot be used
00:08:27.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.909 EAL: Requested device 0000:3f:01.2 cannot be used
00:08:27.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.909 EAL: Requested device 0000:3f:01.3 cannot be used
00:08:27.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.909 EAL: Requested device 0000:3f:01.4 cannot be used
00:08:27.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.909 EAL: Requested device 0000:3f:01.5 cannot be used
00:08:27.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.909 EAL: Requested device 0000:3f:01.6 cannot be used
00:08:27.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.909 EAL: Requested device 0000:3f:01.7 cannot be used
00:08:27.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.909 EAL: Requested device 0000:3f:02.0 cannot be used
00:08:27.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.909 EAL: Requested device 0000:3f:02.1 cannot be used
00:08:27.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.909 EAL: Requested device 0000:3f:02.2 cannot be used
00:08:27.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.909 EAL: Requested device 0000:3f:02.3 cannot be used
00:08:27.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.909 EAL: Requested device 0000:3f:02.4 cannot be used
00:08:27.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.909 EAL: Requested device 0000:3f:02.5 cannot be used
00:08:27.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.909 EAL: Requested device 0000:3f:02.6 cannot be used
00:08:27.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:27.909 EAL: Requested device 0000:3f:02.7 cannot be used
00:08:27.909 [2024-07-13 21:51:47.180356] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:28.168 [2024-07-13 21:51:47.382692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes'
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds'
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:08:28.426 21:51:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:08:30.330 21:51:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:08:30.330 21:51:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:08:30.330 21:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:08:30.330 21:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:08:30.330 21:51:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:08:30.330 21:51:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:08:30.330 21:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:08:30.330 21:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:08:30.330 21:51:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:08:30.330 21:51:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:08:30.330 21:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:08:30.330 21:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:08:30.330 21:51:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:08:30.330 21:51:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:08:30.330 21:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:08:30.330 21:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:08:30.330 21:51:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:08:30.330 21:51:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:08:30.330 21:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:08:30.330 21:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:08:30.330 21:51:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:08:30.330 21:51:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:08:30.330 21:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:08:30.330 21:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:08:30.330 21:51:49 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:30.330 21:51:49 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:08:30.330 21:51:49 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:30.330 
00:08:30.330 real	0m2.621s
00:08:30.330 user	0m2.390s
00:08:30.330 sys	0m0.238s
00:08:30.330 21:51:49 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:30.330 21:51:49 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x
00:08:30.330 ************************************
00:08:30.330 END TEST accel_decomp_full
00:08:30.330 ************************************
00:08:30.330 21:51:49 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:30.330 21:51:49 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:08:30.330 21:51:49 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:08:30.330 21:51:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:30.330 21:51:49 accel -- common/autotest_common.sh@10 -- # set +x
00:08:30.330 ************************************
00:08:30.330 START TEST accel_decomp_mcore
00:08:30.330 ************************************
00:08:30.330 21:51:49 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:08:30.330 21:51:49 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc
00:08:30.330 21:51:49 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module
00:08:30.330 21:51:49 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:08:30.330 21:51:49 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:08:30.330 21:51:49 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config
00:08:30.330 21:51:49 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:30.330 21:51:49 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:30.330 21:51:49 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:30.330 21:51:49 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:30.330 21:51:49 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:30.330 21:51:49 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=,
00:08:30.330 21:51:49 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r .
00:08:30.589 [2024-07-13 21:51:49.720996] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:08:30.589 [2024-07-13 21:51:49.721076] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1307059 ]
00:08:30.589 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:30.589 EAL: Requested device 0000:3d:01.0 cannot be used
00:08:30.589 EAL: (same error repeated for devices 0000:3d:01.1-0000:3d:02.7 and 0000:3f:01.0-0000:3f:02.7)
00:08:30.589 [2024-07-13 21:51:49.881785] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:30.848 [2024-07-13 21:51:50.104047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:30.848 [2024-07-13 21:51:50.104121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:08:30.848 [2024-07-13 21:51:50.104180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:30.848 [2024-07-13 21:51:50.104191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:08:31.107 21:51:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:08:31.107 21:51:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
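The `-m 0xf` core mask passed to accel_perf above selects CPU cores by bit position: 0xf is binary 1111, i.e. cores 0-3, which matches the "Total cores available: 4" notice and the four reactor threads in the log. A minimal sketch of how such a mask decodes (illustrative only, not SPDK code):

```shell
# Decode a hex core mask into the list of selected core IDs.
# 0xf -> bits 0..3 set -> cores 0 1 2 3.
mask=0xf
cores=""
for bit in 0 1 2 3 4 5 6 7; do
    if (( (mask >> bit) & 1 )); then
        cores="$cores$bit "
    fi
done
cores=${cores% }   # trim trailing space
echo "cores selected: $cores"
```

Running this prints `cores selected: 0 1 2 3`, one core per set bit; a mask of 0x5 would instead select cores 0 and 2.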
00:08:31.107 21:51:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:31.107 21:51:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:31.107 21:51:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf
00:08:31.107 21:51:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress
00:08:31.107 21:51:50 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:08:31.107 21:51:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:31.107 21:51:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software
00:08:31.107 21:51:50 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software
00:08:31.107 21:51:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:08:31.107 21:51:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:08:31.107 21:51:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:08:31.107 21:51:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1
00:08:31.107 21:51:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:08:31.107 21:51:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes
00:08:33.012 21:51:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:33.013 21:51:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:08:33.013 21:51:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:33.013
00:08:33.013 real 0m2.673s
00:08:33.013 user 0m7.869s
00:08:33.013 sys 0m0.272s
00:08:33.013 21:51:52 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:33.013 21:51:52 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
00:08:33.013 ************************************
00:08:33.013 END TEST accel_decomp_mcore
00:08:33.013 ************************************
00:08:33.013 21:51:52 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:33.013 21:51:52 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:08:33.013 21:51:52 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:08:33.013 21:51:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:33.013 21:51:52 accel -- common/autotest_common.sh@10 -- # set +x
00:08:33.272 ************************************
00:08:33.272 START TEST accel_decomp_full_mcore
00:08:33.272 ************************************
00:08:33.272 21:51:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:08:33.272 21:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc
00:08:33.272 21:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module
00:08:33.272 21:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:08:33.272 21:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config
00:08:33.272 21:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:08:33.272 21:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:33.272 21:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:33.272 21:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:33.272 21:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:33.272 21:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:33.272 21:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=,
00:08:33.272 21:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r .
00:08:33.272 [2024-07-13 21:51:52.475774] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:08:33.272 [2024-07-13 21:51:52.475855] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1307439 ]
00:08:33.272 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:33.272 EAL: Requested device 0000:3d:01.0 cannot be used
00:08:33.273 EAL: (same error repeated for devices 0000:3d:01.1-0000:3d:02.7 and 0000:3f:01.0-0000:3f:02.7)
00:08:33.273 [2024-07-13 21:51:52.640339] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:33.532 [2024-07-13 21:51:52.852740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:33.532 [2024-07-13 21:51:52.852815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:08:33.532 [2024-07-13 21:51:52.852870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:33.532 [2024-07-13 21:51:52.852881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:08:33.791 21:51:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:08:33.791 21:51:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:33.791 21:51:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:33.791 21:51:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:33.791 21:51:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf
00:08:33.791 21:51:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress
00:08:33.791 21:51:53 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:08:33.791 21:51:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes'
00:08:33.792 21:51:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software
00:08:33.792 21:51:53 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software
00:08:33.792 21:51:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:08:33.792 21:51:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:08:33.792 21:51:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:08:33.792 21:51:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1
00:08:33.792 21:51:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:08:33.792 21:51:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes
00:08:36.327 21:51:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20
-- # val= 00:08:36.327 21:51:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:36.327 21:51:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:36.327 21:51:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:36.327 21:51:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:36.327 21:51:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:36.327 21:51:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:36.327 21:51:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:36.327 21:51:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:36.327 21:51:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:36.327 21:51:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:36.327 21:51:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:36.327 21:51:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:36.327 21:51:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:36.327 21:51:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:36.327 21:51:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:36.327 21:51:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:36.327 21:51:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:36.327 21:51:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:36.327 21:51:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:36.327 21:51:55 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:36.327 21:51:55 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:36.327 21:51:55 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:08:36.327 00:08:36.327 real 0m2.742s 00:08:36.327 user 0m8.167s 00:08:36.327 sys 0m0.267s 00:08:36.327 21:51:55 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.327 21:51:55 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:36.327 ************************************ 00:08:36.327 END TEST accel_decomp_full_mcore 00:08:36.327 ************************************ 00:08:36.327 21:51:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:36.327 21:51:55 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:36.327 21:51:55 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:36.327 21:51:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.327 21:51:55 accel -- common/autotest_common.sh@10 -- # set +x 00:08:36.327 ************************************ 00:08:36.327 START TEST accel_decomp_mthread 00:08:36.327 ************************************ 00:08:36.327 21:51:55 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:36.327 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:36.327 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:36.327 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:36.327 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.327 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 
2 00:08:36.327 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.327 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:36.327 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:36.327 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:36.327 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:36.327 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:36.327 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:36.327 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:36.327 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:36.327 [2024-07-13 21:51:55.283232] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:36.328 [2024-07-13 21:51:55.283307] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1307970 ] 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:08:36.328 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: 
Requested device 0000:3f:01.3 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:36.328 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.328 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:36.328 [2024-07-13 21:51:55.441000] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.328 [2024-07-13 21:51:55.647195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- 
accel/accel.sh@20 -- # val= 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var 
val 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 
00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.588 
21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.588 21:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.494 21:51:57 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:38.494 00:08:38.494 real 0m2.596s 00:08:38.494 user 0m2.356s 00:08:38.494 sys 0m0.239s 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:38.494 21:51:57 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:38.494 ************************************ 00:08:38.494 END TEST accel_decomp_mthread 00:08:38.494 ************************************ 00:08:38.494 21:51:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:38.494 21:51:57 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:38.494 21:51:57 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:38.494 21:51:57 accel -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:08:38.494 21:51:57 accel -- common/autotest_common.sh@10 -- # set +x 00:08:38.754 ************************************ 00:08:38.754 START TEST accel_decomp_full_mthread 00:08:38.754 ************************************ 00:08:38.754 21:51:57 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:38.754 21:51:57 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:38.754 21:51:57 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:38.754 21:51:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.754 21:51:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.754 21:51:57 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:38.754 21:51:57 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:38.754 21:51:57 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:38.754 21:51:57 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:38.754 21:51:57 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:38.754 21:51:57 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:38.754 21:51:57 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:38.754 21:51:57 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:38.754 21:51:57 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:38.754 21:51:57 accel.accel_decomp_full_mthread -- 
accel/accel.sh@41 -- # jq -r . 00:08:38.754 [2024-07-13 21:51:57.964777] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:38.754 [2024-07-13 21:51:57.964849] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1308516 ] 00:08:38.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.754 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:38.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.754 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:38.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.754 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:38.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.754 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:38.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.754 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:38.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.754 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:38.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.754 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:38.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.754 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:38.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.754 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:38.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.754 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:38.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.754 EAL: Requested device 0000:3d:02.2 cannot be 
used 00:08:38.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.754 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:38.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.754 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:38.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.754 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:38.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.754 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:38.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.754 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:38.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.754 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:38.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.754 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:38.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.754 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:38.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.754 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:38.754 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.755 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:38.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.755 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:38.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.755 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:38.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.755 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:38.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.755 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:38.755 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.755 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:38.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.755 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:38.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.755 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:38.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.755 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:38.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.755 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:38.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.755 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:38.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.755 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:38.755 [2024-07-13 21:51:58.118734] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.014 [2024-07-13 21:51:58.322565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.274 21:51:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.274 21:51:58 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:41.210 21:52:00 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:41.210 00:08:41.210 real 0m2.627s 00:08:41.210 user 0m2.393s 00:08:41.210 sys 0m0.241s 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:41.210 21:52:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:41.210 ************************************ 00:08:41.210 END TEST accel_decomp_full_mthread 00:08:41.210 ************************************ 00:08:41.469 21:52:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:41.469 21:52:00 accel -- accel/accel.sh@124 -- # [[ y == y ]] 00:08:41.469 21:52:00 accel -- accel/accel.sh@125 -- # COMPRESSDEV=1 00:08:41.469 21:52:00 accel -- accel/accel.sh@126 -- # get_expected_opcs 00:08:41.469 21:52:00 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:41.469 21:52:00 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1308989 00:08:41.469 21:52:00 accel -- accel/accel.sh@63 -- # waitforlisten 1308989 00:08:41.469 21:52:00 accel -- common/autotest_common.sh@829 -- # '[' -z 1308989 ']' 
00:08:41.469 21:52:00 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.469 21:52:00 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:41.469 21:52:00 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:08:41.469 21:52:00 accel -- accel/accel.sh@61 -- # build_accel_config 00:08:41.469 21:52:00 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.469 21:52:00 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:41.469 21:52:00 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:41.469 21:52:00 accel -- common/autotest_common.sh@10 -- # set +x 00:08:41.469 21:52:00 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:41.469 21:52:00 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:41.469 21:52:00 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:41.469 21:52:00 accel -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:08:41.469 21:52:00 accel -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:08:41.469 21:52:00 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:41.469 21:52:00 accel -- accel/accel.sh@41 -- # jq -r . 00:08:41.469 [2024-07-13 21:52:00.680319] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:41.469 [2024-07-13 21:52:00.680413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1308989 ] 00:08:41.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.469 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:41.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.469 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:41.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.469 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:41.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.469 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:41.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.469 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:41.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.469 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:41.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.469 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:41.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.470 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:41.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.470 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:41.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.470 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:41.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.470 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:41.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.470 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:41.470
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.470 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:41.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.470 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:41.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.470 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:41.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.470 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:41.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.470 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:41.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.470 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:41.470 [2024-07-13 21:52:00.838126] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.728 [2024-07-13 21:52:01.040742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.664 [2024-07-13 21:52:02.015678] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:08:44.050 21:52:03 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:44.050 21:52:03 accel -- common/autotest_common.sh@862 -- # return 0 00:08:44.050 21:52:03 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:08:44.050 21:52:03 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:08:44.050 21:52:03 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:08:44.050 21:52:03 accel -- accel/accel.sh@68 -- # [[ -n 1 ]] 00:08:44.050 21:52:03 accel -- accel/accel.sh@68 -- # check_save_config compressdev_scan_accel_module 00:08:44.050 21:52:03 accel -- accel/accel.sh@56 -- # rpc_cmd save_config 00:08:44.050 21:52:03 accel -- accel/accel.sh@56 -- # jq -r '.subsystems[] | select(.subsystem=="accel").config[]' 00:08:44.050 21:52:03 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.050 21:52:03 accel -- 
common/autotest_common.sh@10 -- # set +x 00:08:44.050 21:52:03 accel -- accel/accel.sh@56 -- # grep compressdev_scan_accel_module 00:08:44.050 21:52:03 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.050 "method": "compressdev_scan_accel_module", 00:08:44.050 21:52:03 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:08:44.050 21:52:03 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:08:44.050 21:52:03 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:08:44.050 21:52:03 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.050 21:52:03 accel -- common/autotest_common.sh@10 -- # set +x 00:08:44.050 21:52:03 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.050 21:52:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:44.050 21:52:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:44.050 21:52:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:44.050 21:52:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:44.050 21:52:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:44.050 21:52:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:44.050 21:52:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # read -r opc module 
00:08:44.050 21:52:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:44.050 21:52:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:44.050 21:52:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:44.050 21:52:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:44.050 21:52:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:44.050 21:52:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:44.050 21:52:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev 00:08:44.050 21:52:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:44.050 21:52:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev 00:08:44.050 21:52:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:44.050 21:52:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:44.050 21:52:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:44.050 21:52:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:44.050 21:52:03 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:44.050 21:52:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:44.050 21:52:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:44.050 21:52:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:44.050 21:52:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:44.050 21:52:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:44.050 21:52:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:44.050 21:52:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:44.050 21:52:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:44.050 21:52:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:44.050 21:52:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:44.050 21:52:03 accel -- accel/accel.sh@75 -- # killprocess 1308989 00:08:44.050 21:52:03 accel -- common/autotest_common.sh@948 -- # '[' -z 1308989 ']' 00:08:44.050 21:52:03 accel -- common/autotest_common.sh@952 -- # kill -0 1308989 00:08:44.050 21:52:03 accel -- common/autotest_common.sh@953 -- # uname 00:08:44.050 21:52:03 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:44.050 21:52:03 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1308989 00:08:44.050 21:52:03 accel -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:44.050 21:52:03 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:44.050 21:52:03 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1308989' 00:08:44.050 killing process with pid 1308989 00:08:44.050 21:52:03 accel -- common/autotest_common.sh@967 -- # kill 1308989 00:08:44.050 21:52:03 accel -- common/autotest_common.sh@972 -- # wait 1308989 00:08:45.953 21:52:05 accel -- accel/accel.sh@76 -- # trap - ERR 00:08:45.953 21:52:05 accel -- accel/accel.sh@127 -- # run_test accel_cdev_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:45.953 21:52:05 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:45.953 21:52:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.953 21:52:05 accel -- common/autotest_common.sh@10 -- # set +x 00:08:46.212 ************************************ 00:08:46.212 START TEST accel_cdev_comp 00:08:46.212 ************************************ 00:08:46.212 21:52:05 accel.accel_cdev_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:46.212 21:52:05 accel.accel_cdev_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:46.212 21:52:05 accel.accel_cdev_comp -- accel/accel.sh@17 -- # local accel_module 00:08:46.212 21:52:05 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:46.212 21:52:05 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:46.212 21:52:05 accel.accel_cdev_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:46.212 21:52:05 accel.accel_cdev_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 
00:08:46.212 21:52:05 accel.accel_cdev_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:46.212 21:52:05 accel.accel_cdev_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:46.212 21:52:05 accel.accel_cdev_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:46.212 21:52:05 accel.accel_cdev_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:46.212 21:52:05 accel.accel_cdev_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:46.212 21:52:05 accel.accel_cdev_comp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:08:46.212 21:52:05 accel.accel_cdev_comp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:08:46.212 21:52:05 accel.accel_cdev_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:46.212 21:52:05 accel.accel_cdev_comp -- accel/accel.sh@41 -- # jq -r . 00:08:46.212 [2024-07-13 21:52:05.400639] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:46.212 [2024-07-13 21:52:05.400716] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1309868 ] 00:08:46.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.212 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:46.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.212 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:46.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.212 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:46.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.212 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:46.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.212 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:46.212 qat_pci_device_allocate(): Reached 
maximum number of QAT
devices 00:08:46.212 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:46.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.212 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:46.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.212 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:46.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.212 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:46.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.212 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:46.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.212 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:46.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.212 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:46.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.212 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:46.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.212 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:46.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.212 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:46.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.212 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:46.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.212 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:46.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.212 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:46.212 [2024-07-13 21:52:05.559884] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.471 [2024-07-13 21:52:05.761082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.407 [2024-07-13 
21:52:06.728659] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:08:47.407 [2024-07-13 21:52:06.731064] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016140 PMD being used: compress_qat 00:08:47.407 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:47.407 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.407 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.407 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.407 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:47.407 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.407 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.407 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.407 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:47.407 [2024-07-13 21:52:06.737414] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016220 PMD being used: compress_qat 00:08:47.407 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.407 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.407 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.407 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=0x1 00:08:47.407 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.407 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r 
var val 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=compress 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:47.408 
21:52:06 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=1 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=No 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:47.408 21:52:06 accel.accel_cdev_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.408 21:52:06 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.312 21:52:08 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:49.312 21:52:08 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.312 21:52:08 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.312 21:52:08 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.312 21:52:08 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:49.312 21:52:08 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.312 21:52:08 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.312 21:52:08 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.312 21:52:08 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:49.312 21:52:08 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.312 21:52:08 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.312 21:52:08 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.312 21:52:08 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:49.312 21:52:08 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.312 21:52:08 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.312 21:52:08 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.312 21:52:08 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:49.312 21:52:08 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.312 21:52:08 
accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.312 21:52:08 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.312 21:52:08 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:49.312 21:52:08 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.312 21:52:08 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.312 21:52:08 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.312 21:52:08 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:08:49.312 21:52:08 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:49.312 21:52:08 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:08:49.312 00:08:49.312 real 0m2.944s 00:08:49.312 user 0m2.448s 00:08:49.312 sys 0m0.503s 00:08:49.312 21:52:08 accel.accel_cdev_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:49.312 21:52:08 accel.accel_cdev_comp -- common/autotest_common.sh@10 -- # set +x 00:08:49.312 ************************************ 00:08:49.312 END TEST accel_cdev_comp 00:08:49.312 ************************************ 00:08:49.312 21:52:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:49.312 21:52:08 accel -- accel/accel.sh@128 -- # run_test accel_cdev_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:49.312 21:52:08 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:49.312 21:52:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.312 21:52:08 accel -- common/autotest_common.sh@10 -- # set +x 00:08:49.312 ************************************ 00:08:49.312 START TEST accel_cdev_decomp 00:08:49.312 ************************************ 00:08:49.312 21:52:08 accel.accel_cdev_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:49.312 21:52:08 accel.accel_cdev_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:49.312 21:52:08 accel.accel_cdev_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:49.312 21:52:08 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:49.312 21:52:08 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:49.312 21:52:08 accel.accel_cdev_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:49.312 21:52:08 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:49.312 21:52:08 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:49.312 21:52:08 accel.accel_cdev_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:49.312 21:52:08 accel.accel_cdev_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:49.312 21:52:08 accel.accel_cdev_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:49.312 21:52:08 accel.accel_cdev_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:49.312 21:52:08 accel.accel_cdev_decomp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:08:49.312 21:52:08 accel.accel_cdev_decomp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:08:49.312 21:52:08 accel.accel_cdev_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:49.312 21:52:08 accel.accel_cdev_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:49.312 [2024-07-13 21:52:08.430000] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:49.312 [2024-07-13 21:52:08.430079] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1310391 ] 00:08:49.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.312 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:49.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.312 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:49.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.312 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:49.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.312 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:49.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.312 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:49.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.312 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:49.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.312 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:49.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.313 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:49.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.313 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:49.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.313 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:49.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.313 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:49.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.313 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:49.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.313 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:49.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.313 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:49.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.313 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:49.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.313 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:49.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.313 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:49.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.313 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:49.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.313 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:49.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.313 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:49.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.313 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:49.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.313 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:49.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.313 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:49.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.313 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:49.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.313 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:49.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.313 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:49.313 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.313 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:49.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.313 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:49.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.313 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:49.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.313 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:49.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.313 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:49.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.313 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:49.313 [2024-07-13 21:52:08.587599] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.573 [2024-07-13 21:52:08.790669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.511 [2024-07-13 21:52:09.756935] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:08:50.511 [2024-07-13 21:52:09.759417] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016140 PMD being used: compress_qat 00:08:50.511 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:50.511 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.511 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.511 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.511 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:50.511 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.511 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.511 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.511 21:52:09 
accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:50.511 [2024-07-13 21:52:09.765868] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016220 PMD being used: compress_qat 00:08:50.511 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.511 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.511 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.511 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:50.512 21:52:09 
accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.512 21:52:09 
accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=1 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.512 21:52:09 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.419 21:52:11 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:52.419 21:52:11 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.419 21:52:11 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.419 21:52:11 
accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.419 21:52:11 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:52.419 21:52:11 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.419 21:52:11 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.419 21:52:11 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.419 21:52:11 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:52.419 21:52:11 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.419 21:52:11 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.419 21:52:11 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.419 21:52:11 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:52.419 21:52:11 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.419 21:52:11 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.419 21:52:11 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.419 21:52:11 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:52.419 21:52:11 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.419 21:52:11 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.419 21:52:11 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.419 21:52:11 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:52.419 21:52:11 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.419 21:52:11 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.419 21:52:11 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.419 21:52:11 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:08:52.419 21:52:11 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:52.419 21:52:11 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ dpdk_compressdev 
== \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:08:52.419 00:08:52.419 real 0m2.927s 00:08:52.419 user 0m2.427s 00:08:52.419 sys 0m0.504s 00:08:52.419 21:52:11 accel.accel_cdev_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:52.419 21:52:11 accel.accel_cdev_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:52.419 ************************************ 00:08:52.419 END TEST accel_cdev_decomp 00:08:52.419 ************************************ 00:08:52.419 21:52:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:52.419 21:52:11 accel -- accel/accel.sh@129 -- # run_test accel_cdev_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:52.419 21:52:11 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:52.419 21:52:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.419 21:52:11 accel -- common/autotest_common.sh@10 -- # set +x 00:08:52.419 ************************************ 00:08:52.419 START TEST accel_cdev_decomp_full 00:08:52.419 ************************************ 00:08:52.419 21:52:11 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:52.419 21:52:11 accel.accel_cdev_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:52.419 21:52:11 accel.accel_cdev_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:52.419 21:52:11 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:52.419 21:52:11 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:52.419 21:52:11 accel.accel_cdev_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:52.419 21:52:11 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:52.419 21:52:11 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:52.419 21:52:11 accel.accel_cdev_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:52.419 21:52:11 accel.accel_cdev_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:52.419 21:52:11 accel.accel_cdev_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:52.419 21:52:11 accel.accel_cdev_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:52.419 21:52:11 accel.accel_cdev_decomp_full -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:08:52.419 21:52:11 accel.accel_cdev_decomp_full -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:08:52.419 21:52:11 accel.accel_cdev_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:52.419 21:52:11 accel.accel_cdev_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:52.419 [2024-07-13 21:52:11.440246] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:52.419 [2024-07-13 21:52:11.440345] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1310875 ] 00:08:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.419 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.419 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.419 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.419 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.419 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.419 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.419 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.419 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.419 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.419 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.419 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.419 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.419 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.419 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.419 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.419 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.419 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.419 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.419 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.419 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.419 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.419 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.420 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:52.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.420 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:52.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.420 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:52.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.420 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:52.420 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.420 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:52.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.420 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:52.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.420 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:52.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.420 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:52.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.420 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:52.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.420 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:52.420 [2024-07-13 21:52:11.600594] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.420 [2024-07-13 21:52:11.798058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.799 [2024-07-13 21:52:12.757649] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:08:53.799 [2024-07-13 21:52:12.759993] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016140 PMD being used: compress_qat 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- 
# read -r var val 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:53.799 [2024-07-13 21:52:12.765933] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016220 PMD being used: compress_qat 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- 
accel/accel.sh@19 -- # read -r var val 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 
00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.799 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.800 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.800 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:53.800 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.800 21:52:12 accel.accel_cdev_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:08:53.800 21:52:12 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.176 21:52:14 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:55.176 21:52:14 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.176 21:52:14 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.176 21:52:14 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.176 21:52:14 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:55.176 21:52:14 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.176 21:52:14 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.176 21:52:14 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.176 21:52:14 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:55.176 21:52:14 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.176 21:52:14 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.176 21:52:14 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.176 21:52:14 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:55.176 21:52:14 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.176 21:52:14 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.176 21:52:14 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.176 21:52:14 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:55.176 21:52:14 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.176 21:52:14 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.176 21:52:14 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.176 21:52:14 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:55.176 21:52:14 
accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.176 21:52:14 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.176 21:52:14 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.176 21:52:14 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:08:55.176 21:52:14 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:55.176 21:52:14 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:08:55.176 00:08:55.176 real 0m2.931s 00:08:55.176 user 0m2.441s 00:08:55.176 sys 0m0.491s 00:08:55.176 21:52:14 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:55.176 21:52:14 accel.accel_cdev_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:55.176 ************************************ 00:08:55.176 END TEST accel_cdev_decomp_full 00:08:55.176 ************************************ 00:08:55.176 21:52:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:55.176 21:52:14 accel -- accel/accel.sh@130 -- # run_test accel_cdev_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:55.176 21:52:14 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:55.176 21:52:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:55.176 21:52:14 accel -- common/autotest_common.sh@10 -- # set +x 00:08:55.176 ************************************ 00:08:55.176 START TEST accel_cdev_decomp_mcore 00:08:55.176 ************************************ 00:08:55.176 21:52:14 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:55.176 21:52:14 accel.accel_cdev_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:55.176 21:52:14 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:55.176 21:52:14 accel.accel_cdev_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:55.176 21:52:14 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.176 21:52:14 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.176 21:52:14 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:55.176 21:52:14 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:55.176 21:52:14 accel.accel_cdev_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:55.176 21:52:14 accel.accel_cdev_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:55.176 21:52:14 accel.accel_cdev_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:55.176 21:52:14 accel.accel_cdev_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:55.176 21:52:14 accel.accel_cdev_decomp_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:08:55.176 21:52:14 accel.accel_cdev_decomp_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:08:55.176 21:52:14 accel.accel_cdev_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:55.176 21:52:14 accel.accel_cdev_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:55.176 [2024-07-13 21:52:14.458303] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:55.176 [2024-07-13 21:52:14.458399] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1311372 ] 00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:55.176 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:55.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.176 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:55.177 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.177 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:55.177 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.177 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:55.435 [2024-07-13 21:52:14.619569] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:55.696 [2024-07-13 21:52:14.824853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.696 [2024-07-13 21:52:14.824932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:55.696 [2024-07-13 21:52:14.824972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.696 [2024-07-13 21:52:14.824981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:56.660 [2024-07-13 21:52:15.849476] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:08:56.660 [2024-07-13 21:52:15.852025] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00002d1a0 PMD being used: compress_qat 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:56.660 [2024-07-13 21:52:15.860538] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000010100 PMD being used: compress_qat 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:56.660 [2024-07-13 21:52:15.861880] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000017100 PMD being used: compress_qat 00:08:56.660 21:52:15 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:56.660 [2024-07-13 21:52:15.864128] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000020160 PMD being used: compress_qat 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:56.660 [2024-07-13 21:52:15.864313] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00002d280 PMD being used: compress_qat 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 
-- # read -r var val 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:56.660 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:56.661 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:56.661 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:56.661 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:56.661 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:56.661 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:56.661 21:52:15 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:56.661 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:56.661 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:56.661 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:56.661 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:56.661 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:56.661 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:56.661 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:56.661 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:56.661 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:56.661 21:52:15 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 
-- # read -r var val 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.567 21:52:17 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:08:58.567 00:08:58.567 real 0m3.135s 00:08:58.567 user 0m0.026s 00:08:58.567 sys 0m0.006s 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.567 21:52:17 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:58.567 ************************************ 00:08:58.567 END TEST accel_cdev_decomp_mcore 00:08:58.567 ************************************ 00:08:58.567 21:52:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:58.567 21:52:17 accel -- accel/accel.sh@131 -- # run_test accel_cdev_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:58.567 21:52:17 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:58.567 21:52:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.567 21:52:17 accel -- common/autotest_common.sh@10 -- # set +x 00:08:58.567 ************************************ 00:08:58.567 START TEST accel_cdev_decomp_full_mcore 00:08:58.567 ************************************ 00:08:58.567 21:52:17 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:58.567 21:52:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:58.567 21:52:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@17 -- # 
local accel_module 00:08:58.567 21:52:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.567 21:52:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.567 21:52:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:58.567 21:52:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:58.567 21:52:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:58.567 21:52:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:58.567 21:52:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:58.567 21:52:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:58.568 21:52:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:58.568 21:52:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:08:58.568 21:52:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:08:58.568 21:52:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:58.568 21:52:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:58.568 [2024-07-13 21:52:17.678104] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:58.568 [2024-07-13 21:52:17.678187] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1311954 ]
00:08:58.568 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:58.568 EAL: Requested device 0000:3d:01.0 cannot be used
00:08:58.568 (the same two lines repeat for each remaining QAT VF, 0000:3d:01.1 through 0000:3f:02.7)
00:08:58.568 [2024-07-13 21:52:17.835731] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:58.827 [2024-07-13 21:52:18.039199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:58.827 [2024-07-13 21:52:18.039276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:08:58.827 [2024-07-13 21:52:18.039338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:58.827 [2024-07-13 21:52:18.039348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:08:59.765 [2024-07-13 21:52:19.067857] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD
00:08:59.765 [2024-07-13 21:52:19.070560] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00002d1a0 PMD being used: compress_qat
00:08:59.765 (the same notice repeats for channels 0x60e000010100, 0x60e000017100, 0x60e000020160 and 0x60e00002d280)
00:08:59.765 21:52:19 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val (config values read: 0xf, decompress, '111250 bytes', dpdk_compressdev, /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib, 32, 32, 1, '1 seconds', Yes)
00:09:01.671 21:52:20 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]]
00:09:01.671 21:52:20 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:09:01.671 21:52:20 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]]
00:09:01.671 real 0m3.173s
00:09:01.671 user 0m9.604s
00:09:01.671 sys 0m0.543s
21:52:20 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
21:52:20 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x
00:09:01.671 ************************************
00:09:01.671 END TEST accel_cdev_decomp_full_mcore
00:09:01.671 ************************************
21:52:20 accel -- common/autotest_common.sh@1142 -- # return 0
21:52:20 accel -- accel/accel.sh@132 -- # run_test accel_cdev_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2
21:52:20 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
21:52:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
21:52:20 accel -- common/autotest_common.sh@10 -- # set +x
************************************
00:09:01.671 START TEST accel_cdev_decomp_mthread
************************************
21:52:20 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2
21:52:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2
21:52:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config
21:52:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}')
21:52:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@41 -- # jq -r .
[2024-07-13 21:52:20.934183] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
[2024-07-13 21:52:20.934262] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1312573 ]
qat_pci_device_allocate(): Reached maximum number of QAT devices
EAL: Requested device 0000:3d:01.0 cannot be used
(the same two lines repeat for each remaining QAT VF, 0000:3d:01.1 through 0000:3f:02.7)
[2024-07-13 21:52:21.090147] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-13 21:52:21.289361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[2024-07-13 21:52:22.258653] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD
[2024-07-13 21:52:22.261066] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016140 PMD being used: compress_qat
(the same notice repeats for channels 0x60e000016220 and 0x60e000016300)
00:09:03.307 21:52:22 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val (config values read: 0x1, decompress, '4096 bytes', dpdk_compressdev, /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib, 32, 32, 2, '1 seconds', Yes)
00:09:04.683 21:52:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]]
00:09:04.683 21:52:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:09:04.683 21:52:23 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]]
00:09:04.683 real 0m2.944s
00:09:04.683 user 0m2.443s
00:09:04.683 sys 0m0.508s
21:52:23 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
21:52:23 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST accel_cdev_decomp_mthread
00:09:04.683 ************************************
21:52:23 accel -- common/autotest_common.sh@1142 -- # return 0
21:52:23 accel -- accel/accel.sh@133 -- # run_test accel_cdev_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
21:52:23 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
21:52:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
21:52:23 accel -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST accel_cdev_decomp_full_mthread
************************************
21:52:23 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
21:52:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
21:52:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config
21:52:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}')
21:52:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r .
[2024-07-13 21:52:23.961800] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
[2024-07-13 21:52:23.961880] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1313108 ]
qat_pci_device_allocate(): Reached maximum number of QAT devices
EAL: Requested device 0000:3d:01.0 cannot be used
(the same two lines repeat for subsequent QAT VFs, 0000:3d:01.1 through 0000:3f:01.3)
used 00:09:04.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.684 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:04.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.684 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:04.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.684 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:04.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.684 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:04.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.684 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:04.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.684 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:04.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.684 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:04.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.684 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:04.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.684 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:04.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.684 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:04.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.684 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:04.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.684 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:04.943 [2024-07-13 21:52:24.121520] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.943 [2024-07-13 21:52:24.320760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.321 [2024-07-13 21:52:25.312307] accel_dpdk_compressdev.c: 
296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:09:06.321 [2024-07-13 21:52:25.314663] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016140 PMD being used: compress_qat 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.321 [2024-07-13 21:52:25.322620] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016220 PMD being used: compress_qat 00:09:06.321 21:52:25 
accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- 
accel/accel.sh@20 -- # val=dpdk_compressdev 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.321 [2024-07-13 21:52:25.329333] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016300 PMD being used: compress_qat 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.321 21:52:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- 
accel/accel.sh@19 -- # IFS=: 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:07.699 21:52:26 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:09:07.699 00:09:07.699 real 0m2.997s 00:09:07.700 user 0m2.498s 00:09:07.700 sys 0m0.507s 00:09:07.700 21:52:26 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:07.700 21:52:26 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:09:07.700 ************************************ 00:09:07.700 END TEST accel_cdev_decomp_full_mthread 00:09:07.700 ************************************ 00:09:07.700 21:52:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:07.700 21:52:26 accel -- accel/accel.sh@134 -- # unset COMPRESSDEV 00:09:07.700 21:52:26 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:07.700 21:52:26 accel -- accel/accel.sh@137 -- # build_accel_config 00:09:07.700 21:52:26 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:07.700 21:52:26 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:07.700 21:52:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.700 21:52:26 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:07.700 21:52:26 accel -- 
common/autotest_common.sh@10 -- # set +x 00:09:07.700 21:52:26 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:07.700 21:52:26 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:07.700 21:52:26 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:07.700 21:52:26 accel -- accel/accel.sh@40 -- # local IFS=, 00:09:07.700 21:52:26 accel -- accel/accel.sh@41 -- # jq -r . 00:09:07.700 ************************************ 00:09:07.700 START TEST accel_dif_functional_tests 00:09:07.700 ************************************ 00:09:07.700 21:52:26 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:07.700 [2024-07-13 21:52:27.075163] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:07.700 [2024-07-13 21:52:27.075240] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1313653 ] 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:07.959 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:09:07.959 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:07.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.959 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:07.959 [2024-07-13 21:52:27.233312] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:08.219 [2024-07-13 21:52:27.441324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.219 [2024-07-13 21:52:27.441390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.219 [2024-07-13 21:52:27.441397] reactor.c: 
941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.478 00:09:08.478 00:09:08.478 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.478 http://cunit.sourceforge.net/ 00:09:08.478 00:09:08.478 00:09:08.478 Suite: accel_dif 00:09:08.478 Test: verify: DIF generated, GUARD check ...passed 00:09:08.478 Test: verify: DIF generated, APPTAG check ...passed 00:09:08.478 Test: verify: DIF generated, REFTAG check ...passed 00:09:08.478 Test: verify: DIF not generated, GUARD check ...[2024-07-13 21:52:27.807694] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:08.478 passed 00:09:08.478 Test: verify: DIF not generated, APPTAG check ...[2024-07-13 21:52:27.807765] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:08.478 passed 00:09:08.479 Test: verify: DIF not generated, REFTAG check ...[2024-07-13 21:52:27.807803] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:08.479 passed 00:09:08.479 Test: verify: APPTAG correct, APPTAG check ...passed 00:09:08.479 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-13 21:52:27.807879] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:09:08.479 passed 00:09:08.479 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:09:08.479 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:09:08.479 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:09:08.479 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-13 21:52:27.808045] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:09:08.479 passed 00:09:08.479 Test: verify copy: DIF generated, GUARD check ...passed 00:09:08.479 Test: verify copy: DIF generated, APPTAG check ...passed 00:09:08.479 Test: verify copy: DIF generated, REFTAG check ...passed 00:09:08.479 Test: verify copy: DIF not 
generated, GUARD check ...[2024-07-13 21:52:27.808233] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:08.479 passed 00:09:08.479 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-13 21:52:27.808277] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:08.479 passed 00:09:08.479 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-13 21:52:27.808327] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:08.479 passed 00:09:08.479 Test: generate copy: DIF generated, GUARD check ...passed 00:09:08.479 Test: generate copy: DIF generated, APTTAG check ...passed 00:09:08.479 Test: generate copy: DIF generated, REFTAG check ...passed 00:09:08.479 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:09:08.479 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:09:08.479 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:09:08.479 Test: generate copy: iovecs-len validate ...[2024-07-13 21:52:27.808616] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:09:08.479 passed 00:09:08.479 Test: generate copy: buffer alignment validate ...passed 00:09:08.479 00:09:08.479 Run Summary: Type Total Ran Passed Failed Inactive 00:09:08.479 suites 1 1 n/a 0 0 00:09:08.479 tests 26 26 26 0 0 00:09:08.479 asserts 115 115 115 0 n/a 00:09:08.479 00:09:08.479 Elapsed time = 0.003 seconds 00:09:09.857 00:09:09.857 real 0m2.066s 00:09:09.857 user 0m4.115s 00:09:09.857 sys 0m0.294s 00:09:09.857 21:52:29 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:09.857 21:52:29 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:09:09.857 ************************************ 00:09:09.857 END TEST accel_dif_functional_tests 00:09:09.857 ************************************ 00:09:09.857 21:52:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:09.857 00:09:09.857 real 1m30.087s 00:09:09.857 user 1m46.166s 00:09:09.857 sys 0m12.867s 00:09:09.857 21:52:29 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:09.857 21:52:29 accel -- common/autotest_common.sh@10 -- # set +x 00:09:09.857 ************************************ 00:09:09.857 END TEST accel 00:09:09.857 ************************************ 00:09:09.857 21:52:29 -- common/autotest_common.sh@1142 -- # return 0 00:09:09.857 21:52:29 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:09:09.857 21:52:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:09.857 21:52:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.857 21:52:29 -- common/autotest_common.sh@10 -- # set +x 00:09:09.857 ************************************ 00:09:09.857 START TEST accel_rpc 00:09:09.857 ************************************ 00:09:09.857 21:52:29 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:09:10.117 * Looking for test storage... 
00:09:10.117 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:09:10.117 21:52:29 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:10.117 21:52:29 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1314039 00:09:10.117 21:52:29 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1314039 00:09:10.117 21:52:29 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:09:10.117 21:52:29 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1314039 ']' 00:09:10.117 21:52:29 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.117 21:52:29 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:10.117 21:52:29 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.117 21:52:29 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:10.117 21:52:29 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.117 [2024-07-13 21:52:29.371299] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:10.117 [2024-07-13 21:52:29.371402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1314039 ] 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3d:02.3 cannot be used 
00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:10.117 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:10.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.117 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:10.376 [2024-07-13 21:52:29.532020] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.377 [2024-07-13 21:52:29.729386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.948 21:52:30 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:10.948 21:52:30 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:10.948 21:52:30 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:09:10.948 21:52:30 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:09:10.948 21:52:30 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:09:10.948 21:52:30 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:09:10.948 21:52:30 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:09:10.948 21:52:30 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:10.948 21:52:30 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.948 21:52:30 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.948 ************************************ 00:09:10.948 START TEST accel_assign_opcode 00:09:10.948 
************************************ 00:09:10.948 21:52:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:09:10.948 21:52:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:09:10.948 21:52:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.948 21:52:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:10.948 [2024-07-13 21:52:30.159164] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:09:10.948 21:52:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.948 21:52:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:09:10.948 21:52:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.948 21:52:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:10.948 [2024-07-13 21:52:30.167140] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:09:10.948 21:52:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.948 21:52:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:09:10.948 21:52:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.948 21:52:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:11.886 21:52:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.886 21:52:31 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:09:11.886 21:52:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.886 21:52:31 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@10 -- # set +x 00:09:11.886 21:52:31 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:09:11.886 21:52:31 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:09:11.886 21:52:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.886 software 00:09:11.886 00:09:11.886 real 0m0.891s 00:09:11.886 user 0m0.027s 00:09:11.886 sys 0m0.009s 00:09:11.886 21:52:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:11.886 21:52:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:11.886 ************************************ 00:09:11.886 END TEST accel_assign_opcode 00:09:11.886 ************************************ 00:09:11.886 21:52:31 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:11.886 21:52:31 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1314039 00:09:11.886 21:52:31 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1314039 ']' 00:09:11.886 21:52:31 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1314039 00:09:11.886 21:52:31 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:09:11.886 21:52:31 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:11.886 21:52:31 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1314039 00:09:11.886 21:52:31 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:11.886 21:52:31 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:11.886 21:52:31 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1314039' 00:09:11.886 killing process with pid 1314039 00:09:11.886 21:52:31 accel_rpc -- common/autotest_common.sh@967 -- # kill 1314039 00:09:11.886 21:52:31 accel_rpc -- common/autotest_common.sh@972 -- # wait 1314039 00:09:14.425 00:09:14.425 real 0m4.296s 00:09:14.425 user 0m4.118s 00:09:14.425 sys 
0m0.643s 00:09:14.425 21:52:33 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:14.425 21:52:33 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.425 ************************************ 00:09:14.425 END TEST accel_rpc 00:09:14.425 ************************************ 00:09:14.425 21:52:33 -- common/autotest_common.sh@1142 -- # return 0 00:09:14.425 21:52:33 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:09:14.425 21:52:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:14.425 21:52:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.425 21:52:33 -- common/autotest_common.sh@10 -- # set +x 00:09:14.425 ************************************ 00:09:14.425 START TEST app_cmdline 00:09:14.425 ************************************ 00:09:14.425 21:52:33 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:09:14.425 * Looking for test storage... 00:09:14.425 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:09:14.425 21:52:33 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:14.425 21:52:33 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1314897 00:09:14.425 21:52:33 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1314897 00:09:14.425 21:52:33 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1314897 ']' 00:09:14.425 21:52:33 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.425 21:52:33 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:14.425 21:52:33 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:14.425 21:52:33 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:14.425 21:52:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:14.425 21:52:33 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:14.425 [2024-07-13 21:52:33.765995] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:14.425 [2024-07-13 21:52:33.766090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1314897 ] 00:09:14.684 [2024-07-13 21:52:33.926233] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.942 [2024-07-13 21:52:34.122834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.879 21:52:34 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:15.879 21:52:34 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:09:15.879 21:52:34 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:09:15.879 { 00:09:15.879 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:09:15.879 "fields": { 00:09:15.879 "major": 24, 00:09:15.879 "minor": 9, 00:09:15.879 "patch": 0, 00:09:15.879 "suffix": "-pre",
00:09:15.879 "commit": "719d03c6a" 00:09:15.879 } 00:09:15.879 } 00:09:15.879 21:52:35 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:15.879 21:52:35 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:15.879 21:52:35 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:15.879 21:52:35 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:15.879 21:52:35 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:15.880 21:52:35 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.880 21:52:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:15.880 21:52:35 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:15.880 21:52:35 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:15.880 21:52:35 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.880 21:52:35 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:15.880 21:52:35 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:15.880 21:52:35 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:15.880 21:52:35 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:09:15.880 21:52:35 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:15.880 21:52:35 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:09:15.880 21:52:35 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:15.880 21:52:35 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:09:15.880 21:52:35 app_cmdline 
-- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:15.880 21:52:35 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:09:15.880 21:52:35 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:15.880 21:52:35 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:09:15.880 21:52:35 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:09:15.880 21:52:35 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:16.139 request: 00:09:16.139 { 00:09:16.139 "method": "env_dpdk_get_mem_stats", 00:09:16.139 "req_id": 1 00:09:16.139 } 00:09:16.139 Got JSON-RPC error response 00:09:16.139 response: 00:09:16.139 { 00:09:16.139 "code": -32601, 00:09:16.139 "message": "Method not found" 00:09:16.139 } 00:09:16.139 21:52:35 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:09:16.139 21:52:35 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:16.139 21:52:35 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:16.139 21:52:35 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:16.139 21:52:35 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1314897 00:09:16.139 21:52:35 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1314897 ']' 00:09:16.139 21:52:35 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1314897 00:09:16.139 21:52:35 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:09:16.139 21:52:35 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:16.139 21:52:35 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1314897 00:09:16.139 21:52:35 app_cmdline -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:09:16.139 21:52:35 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:16.139 21:52:35 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1314897' 00:09:16.139 killing process with pid 1314897 00:09:16.139 21:52:35 app_cmdline -- common/autotest_common.sh@967 -- # kill 1314897 00:09:16.139 21:52:35 app_cmdline -- common/autotest_common.sh@972 -- # wait 1314897 00:09:18.679 00:09:18.679 real 0m4.198s 00:09:18.679 user 0m4.299s 00:09:18.679 sys 0m0.643s 00:09:18.679 21:52:37 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:18.679 21:52:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:18.679 ************************************ 00:09:18.679 END TEST app_cmdline 00:09:18.679 ************************************ 00:09:18.679 21:52:37 -- common/autotest_common.sh@1142 -- # return 0 00:09:18.679 21:52:37 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:09:18.679 21:52:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:18.679 21:52:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:18.679 21:52:37 -- common/autotest_common.sh@10 -- # set +x 00:09:18.679 ************************************ 00:09:18.679 START TEST version 00:09:18.679 ************************************ 00:09:18.679 21:52:37 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:09:18.679 * Looking for test storage... 
00:09:18.679 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:09:18.679 21:52:37 version -- app/version.sh@17 -- # get_header_version major 00:09:18.679 21:52:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:09:18.679 21:52:37 version -- app/version.sh@14 -- # cut -f2 00:09:18.679 21:52:37 version -- app/version.sh@14 -- # tr -d '"' 00:09:18.679 21:52:37 version -- app/version.sh@17 -- # major=24 00:09:18.679 21:52:37 version -- app/version.sh@18 -- # get_header_version minor 00:09:18.679 21:52:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:09:18.679 21:52:37 version -- app/version.sh@14 -- # cut -f2 00:09:18.679 21:52:37 version -- app/version.sh@14 -- # tr -d '"' 00:09:18.679 21:52:37 version -- app/version.sh@18 -- # minor=9 00:09:18.679 21:52:37 version -- app/version.sh@19 -- # get_header_version patch 00:09:18.679 21:52:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:09:18.679 21:52:37 version -- app/version.sh@14 -- # cut -f2 00:09:18.679 21:52:37 version -- app/version.sh@14 -- # tr -d '"' 00:09:18.679 21:52:37 version -- app/version.sh@19 -- # patch=0 00:09:18.679 21:52:37 version -- app/version.sh@20 -- # get_header_version suffix 00:09:18.679 21:52:37 version -- app/version.sh@14 -- # tr -d '"' 00:09:18.679 21:52:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:09:18.679 21:52:37 version -- app/version.sh@14 -- # cut -f2 00:09:18.679 21:52:37 version -- app/version.sh@20 -- # suffix=-pre 00:09:18.679 21:52:37 version -- app/version.sh@22 -- # version=24.9 00:09:18.679 21:52:37 
version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:18.679 21:52:37 version -- app/version.sh@28 -- # version=24.9rc0 00:09:18.679 21:52:37 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:09:18.679 21:52:37 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:18.679 21:52:38 version -- app/version.sh@30 -- # py_version=24.9rc0 00:09:18.679 21:52:38 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:09:18.679 00:09:18.679 real 0m0.180s 00:09:18.679 user 0m0.090s 00:09:18.679 sys 0m0.135s 00:09:18.679 21:52:38 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:18.679 21:52:38 version -- common/autotest_common.sh@10 -- # set +x 00:09:18.679 ************************************ 00:09:18.679 END TEST version 00:09:18.679 ************************************ 00:09:18.679 21:52:38 -- common/autotest_common.sh@1142 -- # return 0 00:09:18.679 21:52:38 -- spdk/autotest.sh@188 -- # '[' 1 -eq 1 ']' 00:09:18.679 21:52:38 -- spdk/autotest.sh@189 -- # run_test blockdev_general /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:09:18.679 21:52:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:18.679 21:52:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:18.679 21:52:38 -- common/autotest_common.sh@10 -- # set +x 00:09:18.939 ************************************ 00:09:18.939 START TEST blockdev_general 00:09:18.939 ************************************ 00:09:18.939 21:52:38 blockdev_general -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:09:18.939 * Looking for test storage... 
00:09:18.939 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:09:18.939 21:52:38 blockdev_general -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:09:18.939 21:52:38 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:09:18.939 21:52:38 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:18.939 21:52:38 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:09:18.939 21:52:38 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:09:18.939 21:52:38 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:09:18.939 21:52:38 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:09:18.939 21:52:38 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:18.939 21:52:38 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:09:18.939 21:52:38 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:09:18.939 21:52:38 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:09:18.939 21:52:38 blockdev_general -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:09:18.939 21:52:38 blockdev_general -- bdev/blockdev.sh@674 -- # uname -s 00:09:18.939 21:52:38 blockdev_general -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:09:18.939 21:52:38 blockdev_general -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:09:18.939 21:52:38 blockdev_general -- bdev/blockdev.sh@682 -- # test_type=bdev 00:09:18.939 21:52:38 blockdev_general -- bdev/blockdev.sh@683 -- # crypto_device= 00:09:18.939 21:52:38 blockdev_general -- bdev/blockdev.sh@684 -- # dek= 00:09:18.939 21:52:38 blockdev_general -- bdev/blockdev.sh@685 -- # env_ctx= 00:09:18.939 21:52:38 blockdev_general -- 
bdev/blockdev.sh@686 -- # wait_for_rpc= 00:09:18.939 21:52:38 blockdev_general -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:09:18.939 21:52:38 blockdev_general -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:09:18.939 21:52:38 blockdev_general -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:09:18.939 21:52:38 blockdev_general -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:09:18.939 21:52:38 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=1315810 00:09:18.939 21:52:38 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:18.939 21:52:38 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 1315810 00:09:18.939 21:52:38 blockdev_general -- common/autotest_common.sh@829 -- # '[' -z 1315810 ']' 00:09:18.939 21:52:38 blockdev_general -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.939 21:52:38 blockdev_general -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:18.939 21:52:38 blockdev_general -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.939 21:52:38 blockdev_general -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:18.939 21:52:38 blockdev_general -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:09:18.939 21:52:38 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:18.939 [2024-07-13 21:52:38.288514] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:18.939 [2024-07-13 21:52:38.288611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1315810 ] 00:09:19.199 [2024-07-13 21:52:38.449331] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.457 [2024-07-13 21:52:38.651922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.716 21:52:39 blockdev_general -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:19.716 21:52:39 blockdev_general -- common/autotest_common.sh@862 -- # return 0 00:09:19.716 21:52:39 blockdev_general -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:09:19.716 21:52:39 blockdev_general -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:09:19.716 21:52:39 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:09:19.716 21:52:39 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.716 21:52:39 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:20.651 [2024-07-13 21:52:39.906303] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:20.652 [2024-07-13 21:52:39.906359] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:20.652 00:09:20.652 [2024-07-13 21:52:39.914284] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently
unable to find bdev with name: Malloc2 00:09:20.652 [2024-07-13 21:52:39.914317] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:20.652 00:09:20.652 Malloc0 00:09:20.652 Malloc1 00:09:20.909 Malloc2 00:09:20.910 Malloc3 00:09:20.910 Malloc4 00:09:20.910 Malloc5 00:09:20.910 Malloc6 00:09:20.910 Malloc7 00:09:21.167 Malloc8 00:09:21.167 Malloc9 00:09:21.167 [2024-07-13 21:52:40.363378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:21.167 [2024-07-13 21:52:40.363432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.167 [2024-07-13 21:52:40.363454] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000045080 00:09:21.167 [2024-07-13 21:52:40.363469] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.167 [2024-07-13 21:52:40.365508] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.167 [2024-07-13 21:52:40.365537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:09:21.167 TestPT 00:09:21.167 21:52:40 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.167 21:52:40 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile bs=2048 count=5000 00:09:21.167 5000+0 records in 00:09:21.167 5000+0 records out 00:09:21.167 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0237281 s, 432 MB/s 00:09:21.167 21:52:40 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile AIO0 2048 00:09:21.167 21:52:40 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.167 21:52:40 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:21.167 AIO0 00:09:21.167 21:52:40 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.167 
21:52:40 blockdev_general -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:09:21.167 21:52:40 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.167 21:52:40 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:21.167 21:52:40 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.167 21:52:40 blockdev_general -- bdev/blockdev.sh@740 -- # cat 00:09:21.167 21:52:40 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:09:21.167 21:52:40 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.167 21:52:40 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:21.167 21:52:40 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.167 21:52:40 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:09:21.167 21:52:40 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.167 21:52:40 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:21.167 21:52:40 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.167 21:52:40 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:21.167 21:52:40 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.167 21:52:40 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:21.167 21:52:40 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.167 21:52:40 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:09:21.167 21:52:40 blockdev_general -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:09:21.167 21:52:40 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:09:21.167 21:52:40 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.426 21:52:40 blockdev_general -- common/autotest_common.sh@10 -- # set +x 
00:09:21.426 21:52:40 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.426 21:52:40 blockdev_general -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:09:21.426 21:52:40 blockdev_general -- bdev/blockdev.sh@749 -- # jq -r .name 00:09:21.427 21:52:40 blockdev_general -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "85a9e68f-d93e-4d48-b035-7a4339be4c09"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "85a9e68f-d93e-4d48-b035-7a4339be4c09",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "ad1c08bb-2dc9-571e-b9bb-26e9b6c00284"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "ad1c08bb-2dc9-571e-b9bb-26e9b6c00284",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": 
false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "f137220c-4aed-5c94-b22e-c5c327c58c07"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f137220c-4aed-5c94-b22e-c5c327c58c07",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "3c3f0708-ecea-5993-9e1e-677e81ff18ce"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3c3f0708-ecea-5993-9e1e-677e81ff18ce",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' 
"get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "264ce625-e209-5c55-bd1b-ae5ff6abbc23"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "264ce625-e209-5c55-bd1b-ae5ff6abbc23",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "93a0a68c-e9e8-5409-b50c-b7228b9b285d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "93a0a68c-e9e8-5409-b50c-b7228b9b285d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' 
' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "c4529d0c-715c-535e-96f7-b10a2ebdc8f2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c4529d0c-715c-535e-96f7-b10a2ebdc8f2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "65f0743a-1a72-5150-b10b-9fa135521843"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "65f0743a-1a72-5150-b10b-9fa135521843",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "9ac90f74-aa24-5fb1-a94f-b1b5e2c83a81"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9ac90f74-aa24-5fb1-a94f-b1b5e2c83a81",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "0cf8f958-aefe-5792-a12a-fb6df6656f26"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0cf8f958-aefe-5792-a12a-fb6df6656f26",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' 
"seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "56984f64-12ee-5e99-a391-ae6c00ad4b26"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "56984f64-12ee-5e99-a391-ae6c00ad4b26",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "b412548d-7c71-5a69-93d8-f8e9b111a112"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b412548d-7c71-5a69-93d8-f8e9b111a112",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": 
true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "954ef0e3-1fb0-4fa6-af76-725f189b737a"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "954ef0e3-1fb0-4fa6-af76-725f189b737a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "954ef0e3-1fb0-4fa6-af76-725f189b737a",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "302a1350-4559-4eae-9726-1f5a492dab95",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": 
"9ec22841-bb8d-4b33-aa8c-6bb0691b8097",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "3bdf1ee9-4491-467f-9e2f-5553e41ad62d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3bdf1ee9-4491-467f-9e2f-5553e41ad62d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "3bdf1ee9-4491-467f-9e2f-5553e41ad62d",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "66487e29-732e-4b4c-8520-1dc241caf439",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "a7639838-ada5-44ec-a8b5-a724725944dc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' 
"94556002-43f6-4b50-be1b-a8903f1fb1a6"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "94556002-43f6-4b50-be1b-a8903f1fb1a6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "94556002-43f6-4b50-be1b-a8903f1fb1a6",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "47a067a6-3764-4035-9b1c-06ad866fac2d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "8a0e8e4c-f4c6-465b-bb09-cb806628a420",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "903bc713-2351-455f-a74f-40757d48db51"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "903bc713-2351-455f-a74f-40757d48db51",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:09:21.428 21:52:40 blockdev_general -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:09:21.428 21:52:40 blockdev_general -- bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:09:21.428 21:52:40 blockdev_general -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:09:21.428 21:52:40 blockdev_general -- bdev/blockdev.sh@754 -- # killprocess 1315810 00:09:21.428 21:52:40 blockdev_general -- common/autotest_common.sh@948 -- # '[' -z 1315810 ']' 00:09:21.428 21:52:40 blockdev_general -- common/autotest_common.sh@952 -- # kill -0 1315810 00:09:21.428 21:52:40 blockdev_general -- common/autotest_common.sh@953 -- # uname 00:09:21.428 21:52:40 blockdev_general -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:21.428 21:52:40 blockdev_general -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1315810 00:09:21.428 21:52:40 blockdev_general -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:21.428 21:52:40 blockdev_general -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:21.428 21:52:40 blockdev_general -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 1315810' 00:09:21.428 killing process with pid 1315810 00:09:21.428 21:52:40 blockdev_general -- common/autotest_common.sh@967 -- # kill 1315810 00:09:21.428 21:52:40 blockdev_general -- common/autotest_common.sh@972 -- # wait 1315810 00:09:24.716 21:52:44 blockdev_general -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:24.716 21:52:44 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:09:24.716 21:52:44 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:24.716 21:52:44 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.716 21:52:44 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:24.716 ************************************ 00:09:24.716 START TEST bdev_hello_world 00:09:24.716 ************************************ 00:09:24.716 21:52:44 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:09:24.975 [2024-07-13 21:52:44.138965] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:24.975 [2024-07-13 21:52:44.139054] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1316812 ]
00:09:24.975 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:24.975 EAL: Requested device 0000:3d:01.0 cannot be used
[message pair repeated for devices 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7]
00:09:24.976 [2024-07-13 21:52:44.299462] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:25.235 [2024-07-13 21:52:44.499263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:25.803 [2024-07-13 21:52:44.933814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:09:25.803 [2024-07-13 21:52:44.933873] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:09:25.803 [2024-07-13 21:52:44.933893] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:09:25.803 [2024-07-13 21:52:44.941802] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:09:25.803 [2024-07-13 21:52:44.941835] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:09:25.803 [2024-07-13 21:52:44.949809] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:09:25.803 [2024-07-13 21:52:44.949838] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:09:25.803 [2024-07-13 21:52:45.143463]
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:25.803 [2024-07-13 21:52:45.143514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.803 [2024-07-13 21:52:45.143529] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:09:25.803 [2024-07-13 21:52:45.143540] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.803 [2024-07-13 21:52:45.145493] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.803 [2024-07-13 21:52:45.145522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:09:26.382 [2024-07-13 21:52:45.514466] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:26.382 [2024-07-13 21:52:45.514511] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:09:26.382 [2024-07-13 21:52:45.514551] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:26.382 [2024-07-13 21:52:45.514599] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:26.382 [2024-07-13 21:52:45.514649] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:26.382 [2024-07-13 21:52:45.514666] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:26.382 [2024-07-13 21:52:45.514705] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:09:26.382 00:09:26.382 [2024-07-13 21:52:45.514738] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:28.286 00:09:28.286 real 0m3.562s 00:09:28.286 user 0m3.095s 00:09:28.286 sys 0m0.392s 00:09:28.286 21:52:47 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:28.286 21:52:47 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:28.286 ************************************ 00:09:28.286 END TEST bdev_hello_world 00:09:28.286 ************************************ 00:09:28.286 21:52:47 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:09:28.286 21:52:47 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:09:28.286 21:52:47 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:28.286 21:52:47 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:28.286 21:52:47 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:28.545 ************************************ 00:09:28.545 START TEST bdev_bounds 00:09:28.545 ************************************ 00:09:28.545 21:52:47 blockdev_general.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:09:28.545 21:52:47 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=1317441 00:09:28.545 21:52:47 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:28.545 21:52:47 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 1317441' 00:09:28.545 Process bdevio pid: 1317441 00:09:28.545 21:52:47 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 1317441 00:09:28.545 21:52:47 blockdev_general.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 1317441 ']' 00:09:28.545 21:52:47 blockdev_general.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 
00:09:28.545 21:52:47 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100
00:09:28.545 21:52:47 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:28.545 21:52:47 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable
00:09:28.545 21:52:47 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:09:28.545 21:52:47 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json ''
00:09:28.545 [2024-07-13 21:52:47.794636] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:09:28.545 [2024-07-13 21:52:47.794730] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1317441 ]
00:09:28.545 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:28.545 EAL: Requested device 0000:3d:01.0 cannot be used
00:09:28.546 [previous two messages repeated for devices 0000:3d:01.1 through 0000:3f:02.7]
00:09:28.805 [2024-07-13 21:52:47.956676] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:28.805 [2024-07-13 21:52:48.157348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:09:28.805 [2024-07-13 21:52:48.157416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:28.805 [2024-07-13 21:52:48.157421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:09:29.372 [2024-07-13 21:52:48.608563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:09:29.373 [2024-07-13 21:52:48.608624] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:09:29.373 [2024-07-13 21:52:48.608638] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:09:29.373 [2024-07-13 21:52:48.616578] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:09:29.373 [2024-07-13 21:52:48.616611] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:09:29.373 [2024-07-13 21:52:48.624588] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:09:29.373 [2024-07-13 21:52:48.624617] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:09:29.631 [2024-07-13 21:52:48.822639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:09:29.631 [2024-07-13 21:52:48.822692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:29.631 [2024-07-13 21:52:48.822709] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80
00:09:29.631 [2024-07-13 21:52:48.822721] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:29.631 [2024-07-13 21:52:48.824889] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:29.631 [2024-07-13 21:52:48.824926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:09:29.890 21:52:49 blockdev_general.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:29.891 21:52:49 blockdev_general.bdev_bounds --
common/autotest_common.sh@862 -- # return 0
00:09:29.891 21:52:49 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests
I/O targets:
00:09:29.891 Malloc0: 65536 blocks of 512 bytes (32 MiB)
00:09:29.891 Malloc1p0: 32768 blocks of 512 bytes (16 MiB)
00:09:29.891 Malloc1p1: 32768 blocks of 512 bytes (16 MiB)
00:09:29.891 Malloc2p0: 8192 blocks of 512 bytes (4 MiB)
00:09:29.891 Malloc2p1: 8192 blocks of 512 bytes (4 MiB)
00:09:29.891 Malloc2p2: 8192 blocks of 512 bytes (4 MiB)
00:09:29.891 Malloc2p3: 8192 blocks of 512 bytes (4 MiB)
00:09:29.891 Malloc2p4: 8192 blocks of 512 bytes (4 MiB)
00:09:29.891 Malloc2p5: 8192 blocks of 512 bytes (4 MiB)
00:09:29.891 Malloc2p6: 8192 blocks of 512 bytes (4 MiB)
00:09:29.891 Malloc2p7: 8192 blocks of 512 bytes (4 MiB)
00:09:29.891 TestPT: 65536 blocks of 512 bytes (32 MiB)
00:09:29.891 raid0: 131072 blocks of 512 bytes (64 MiB)
00:09:29.891 concat0: 131072 blocks of 512 bytes (64 MiB)
00:09:29.891 raid1: 65536 blocks of 512 bytes (32 MiB)
00:09:29.891 AIO0: 5000 blocks of 2048 bytes (10 MiB)
00:09:29.891
00:09:29.891
00:09:29.891 CUnit - A unit testing framework for C - Version 2.1-3
00:09:29.891 http://cunit.sourceforge.net/
00:09:29.891
00:09:29.891
00:09:29.891 Suite: bdevio tests on: AIO0
00:09:29.891 Test: blockdev write read block ...passed
00:09:29.891 Test: blockdev write zeroes read block ...passed
00:09:30.150 Test: blockdev write zeroes read no split ...passed
00:09:30.150 Test: blockdev write zeroes read split ...passed
00:09:30.150 Test: blockdev write zeroes read split partial ...passed
00:09:30.150 Test: blockdev reset ...passed
00:09:30.150 Test: blockdev write read 8 blocks ...passed
00:09:30.150 Test: blockdev write read size > 128k ...passed
00:09:30.150 Test: blockdev write read invalid size ...passed
00:09:30.150 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:30.150 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:30.150 Test: blockdev write read max offset ...passed
00:09:30.150 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:30.150 Test: blockdev writev readv 8 blocks ...passed
00:09:30.150 Test: blockdev writev readv 30 x 1block ...passed
00:09:30.150 Test: blockdev writev readv block ...passed
00:09:30.150 Test: blockdev writev readv size > 128k ...passed
00:09:30.150 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:30.150 Test: blockdev comparev and writev ...passed
00:09:30.150 Test: blockdev nvme passthru rw ...passed
00:09:30.150 Test: blockdev nvme passthru vendor specific ...passed
00:09:30.150 Test: blockdev nvme admin passthru ...passed
00:09:30.150 Test: blockdev copy ...passed
00:09:30.150 Suite: bdevio tests on: raid1
00:09:30.150 Test: blockdev write read block ...passed
00:09:30.150 Test: blockdev write zeroes read block ...passed
00:09:30.150 Test: blockdev write zeroes read no split ...passed
00:09:30.150 Test: blockdev write zeroes read split ...passed
00:09:30.150 Test: blockdev write zeroes read split partial ...passed
00:09:30.150 Test: blockdev reset ...passed
00:09:30.150 Test: blockdev write read 8 blocks ...passed
00:09:30.150 Test: blockdev write read size > 128k ...passed
00:09:30.150 Test: blockdev write read invalid size ...passed
00:09:30.150 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:30.150 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:30.150 Test: blockdev write read max offset ...passed
00:09:30.150 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:30.150 Test: blockdev writev readv 8 blocks ...passed
00:09:30.150 Test: blockdev writev readv 30 x 1block ...passed
00:09:30.150 Test: blockdev writev readv block ...passed
00:09:30.150 Test: blockdev writev readv size > 128k ...passed
00:09:30.150 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:30.150 Test: blockdev comparev and writev ...passed
00:09:30.150 Test: blockdev nvme passthru rw ...passed
00:09:30.150 Test: blockdev nvme passthru vendor specific ...passed
00:09:30.150 Test: blockdev nvme admin passthru ...passed
00:09:30.150 Test: blockdev copy ...passed
00:09:30.150 Suite: bdevio tests on: concat0
00:09:30.150 Test: blockdev write read block ...passed
00:09:30.150 Test: blockdev write zeroes read block ...passed
00:09:30.150 Test: blockdev write zeroes read no split ...passed
00:09:30.150 Test: blockdev write zeroes read split ...passed
00:09:30.150 Test: blockdev write zeroes read split partial ...passed
00:09:30.150 Test: blockdev reset ...passed
00:09:30.150 Test: blockdev write read 8 blocks ...passed
00:09:30.150 Test: blockdev write read size > 128k ...passed
00:09:30.150 Test: blockdev write read invalid size ...passed
00:09:30.150 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:30.150 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:30.150 Test: blockdev write read max offset ...passed
00:09:30.150 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:30.150 Test: blockdev writev readv 8 blocks ...passed
00:09:30.150 Test: blockdev writev readv 30 x 1block ...passed
00:09:30.150 Test: blockdev writev readv block ...passed
00:09:30.150 Test: blockdev writev readv size > 128k ...passed
00:09:30.150 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:30.150 Test: blockdev comparev and writev ...passed
00:09:30.150 Test: blockdev nvme passthru rw ...passed
00:09:30.150 Test: blockdev nvme passthru vendor specific ...passed
00:09:30.150 Test: blockdev nvme admin passthru ...passed
00:09:30.150 Test: blockdev copy ...passed
00:09:30.150 Suite: bdevio tests on: raid0
00:09:30.150 Test: blockdev write read block ...passed
00:09:30.150 Test: blockdev write zeroes read block ...passed
00:09:30.150 Test: blockdev write zeroes read no split ...passed
00:09:30.150 Test: blockdev write zeroes read split ...passed
00:09:30.150 Test: blockdev write zeroes read split partial ...passed
00:09:30.150 Test: blockdev reset ...passed
00:09:30.150 Test: blockdev write read 8 blocks ...passed
00:09:30.150 Test: blockdev write read size > 128k ...passed
00:09:30.150 Test: blockdev write read invalid size ...passed
00:09:30.150 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:30.150 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:30.150 Test: blockdev write read max offset ...passed
00:09:30.150 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:30.150 Test: blockdev writev readv 8 blocks ...passed
00:09:30.150 Test: blockdev writev readv 30 x 1block ...passed
00:09:30.150 Test: blockdev writev readv block ...passed
00:09:30.150 Test: blockdev writev readv size > 128k ...passed
00:09:30.150 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:30.150 Test: blockdev comparev and writev ...passed
00:09:30.150 Test: blockdev nvme passthru rw ...passed
00:09:30.150 Test: blockdev nvme passthru vendor specific ...passed
00:09:30.150 Test: blockdev nvme admin passthru ...passed
00:09:30.150 Test: blockdev copy ...passed
00:09:30.150 Suite: bdevio tests on: TestPT
00:09:30.150 Test: blockdev write read block ...passed
00:09:30.150 Test: blockdev write zeroes read block ...passed
00:09:30.150 Test: blockdev write zeroes read no split ...passed
00:09:30.430 Test: blockdev write zeroes read split ...passed
00:09:30.430 Test: blockdev write zeroes read split partial ...passed
00:09:30.430 Test: blockdev reset ...passed
00:09:30.430 Test: blockdev write read 8 blocks ...passed
00:09:30.430 Test: blockdev write read size > 128k ...passed
00:09:30.430 Test: blockdev write read invalid size ...passed
00:09:30.430 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:30.430 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:30.430 Test: blockdev write read max offset ...passed
00:09:30.430 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:30.430 Test: blockdev writev readv 8 blocks ...passed
00:09:30.430 Test: blockdev writev readv 30 x 1block ...passed
00:09:30.430 Test: blockdev writev readv block ...passed
00:09:30.430 Test: blockdev writev readv size > 128k ...passed
00:09:30.430 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:30.430 Test: blockdev comparev and writev ...passed
00:09:30.430 Test: blockdev nvme passthru rw ...passed
00:09:30.430 Test: blockdev nvme passthru vendor specific ...passed
00:09:30.430 Test: blockdev nvme admin passthru ...passed
00:09:30.430 Test: blockdev copy ...passed
00:09:30.430 Suite: bdevio tests on: Malloc2p7
00:09:30.430 Test: blockdev write read block ...passed
00:09:30.430 Test: blockdev write zeroes read block ...passed
00:09:30.430 Test: blockdev write zeroes read no split ...passed
00:09:30.430 Test: blockdev write zeroes read split ...passed
00:09:30.430 Test: blockdev write zeroes read split partial ...passed
00:09:30.430 Test: blockdev reset ...passed
00:09:30.430 Test: blockdev write read 8 blocks ...passed
00:09:30.430 Test: blockdev write read size > 128k ...passed
00:09:30.430 Test: blockdev write read invalid size ...passed
00:09:30.430 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:30.430 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:30.430 Test: blockdev write read max offset ...passed
00:09:30.430 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:30.430 Test: blockdev writev readv 8 blocks ...passed
00:09:30.430 Test: blockdev writev readv 30 x 1block ...passed
00:09:30.430 Test: blockdev writev readv block ...passed
00:09:30.430 Test: blockdev writev readv size > 128k ...passed
00:09:30.430 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:30.430 Test: blockdev comparev and writev ...passed
00:09:30.430 Test: blockdev nvme passthru rw ...passed
00:09:30.430 Test: blockdev nvme passthru vendor specific ...passed
00:09:30.430 Test: blockdev nvme admin passthru ...passed
00:09:30.430 Test: blockdev copy ...passed
00:09:30.430 Suite: bdevio tests on: Malloc2p6
00:09:30.430 Test: blockdev write read block ...passed
00:09:30.430 Test: blockdev write zeroes read block ...passed
00:09:30.430 Test: blockdev write zeroes read no split ...passed
00:09:30.430 Test: blockdev write zeroes read split ...passed
00:09:30.430 Test: blockdev write zeroes read split partial ...passed
00:09:30.430 Test: blockdev reset ...passed
00:09:30.430 Test: blockdev write read 8 blocks ...passed
00:09:30.430 Test: blockdev write read size > 128k ...passed
00:09:30.430 Test: blockdev write read invalid size ...passed
00:09:30.430 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:30.430 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:30.430 Test: blockdev write read max offset ...passed
00:09:30.430 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:30.430 Test: blockdev writev readv 8 blocks ...passed
00:09:30.430 Test: blockdev writev readv 30 x 1block ...passed
00:09:30.430 Test: blockdev writev readv block ...passed
00:09:30.430 Test: blockdev writev readv size > 128k ...passed
00:09:30.430 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:30.430 Test: blockdev comparev and writev ...passed
00:09:30.430 Test: blockdev nvme passthru rw ...passed
00:09:30.430 Test: blockdev nvme passthru vendor specific ...passed
00:09:30.430 Test: blockdev nvme admin passthru ...passed
00:09:30.430 Test: blockdev copy ...passed
00:09:30.430 Suite: bdevio tests on: Malloc2p5
00:09:30.430 Test: blockdev write read block ...passed
00:09:30.430 Test: blockdev write zeroes read block ...passed
00:09:30.430 Test: blockdev write zeroes read no split ...passed
00:09:30.430 Test: blockdev write zeroes read split ...passed
00:09:30.430 Test: blockdev write zeroes read split partial ...passed
00:09:30.430 Test: blockdev reset ...passed
00:09:30.430 Test: blockdev write read 8 blocks ...passed
00:09:30.430 Test: blockdev write read size > 128k ...passed
00:09:30.430 Test: blockdev write read invalid size ...passed
00:09:30.430 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:30.430 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:30.430 Test: blockdev write read max offset ...passed
00:09:30.430 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:30.430 Test: blockdev writev readv 8 blocks ...passed
00:09:30.430 Test: blockdev writev readv 30 x 1block ...passed
00:09:30.431 Test: blockdev writev readv block ...passed
00:09:30.431 Test: blockdev writev readv size > 128k ...passed
00:09:30.431 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:30.431 Test: blockdev comparev and writev ...passed
00:09:30.431 Test: blockdev nvme passthru rw ...passed
00:09:30.431 Test: blockdev nvme passthru vendor specific ...passed
00:09:30.431 Test: blockdev nvme admin passthru ...passed
00:09:30.431 Test: blockdev copy ...passed
00:09:30.431 Suite: bdevio tests on: Malloc2p4
00:09:30.431 Test: blockdev write read block ...passed
00:09:30.431 Test: blockdev write zeroes read block ...passed
00:09:30.431 Test: blockdev write zeroes read no split ...passed
00:09:30.690 Test: blockdev write zeroes read split ...passed
00:09:30.690 Test: blockdev write zeroes read split partial ...passed
00:09:30.690 Test: blockdev reset ...passed
00:09:30.690 Test: blockdev write read 8 blocks ...passed
00:09:30.690 Test: blockdev write read size > 128k ...passed
00:09:30.690 Test: blockdev write read invalid size ...passed
00:09:30.690 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:30.690 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:30.690 Test: blockdev write read max offset ...passed
00:09:30.690 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:30.690 Test: blockdev writev readv 8 blocks ...passed
00:09:30.690 Test: blockdev writev readv 30 x 1block ...passed
00:09:30.690 Test: blockdev writev readv block ...passed
00:09:30.690 Test: blockdev writev readv size > 128k ...passed
00:09:30.690 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:30.690 Test: blockdev comparev and writev ...passed
00:09:30.690 Test: blockdev nvme passthru rw ...passed
00:09:30.690 Test: blockdev nvme passthru vendor specific ...passed
00:09:30.690 Test: blockdev nvme admin passthru ...passed
00:09:30.690 Test: blockdev copy ...passed
00:09:30.690 Suite: bdevio tests on: Malloc2p3
00:09:30.690 Test: blockdev write read block ...passed
00:09:30.690 Test: blockdev write zeroes read block ...passed
00:09:30.690 Test: blockdev write zeroes read no split ...passed
00:09:30.690 Test: blockdev write zeroes read split ...passed
00:09:30.690 Test: blockdev write zeroes read split partial ...passed
00:09:30.690 Test: blockdev reset ...passed
00:09:30.690 Test: blockdev write read 8 blocks ...passed
00:09:30.690 Test: blockdev write read size > 128k ...passed
00:09:30.690 Test: blockdev write read invalid size ...passed
00:09:30.690 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:30.690 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:30.690 Test: blockdev write read max offset ...passed
00:09:30.690 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:30.690 Test: blockdev writev readv 8 blocks ...passed
00:09:30.690 Test: blockdev writev readv 30 x 1block ...passed
00:09:30.690 Test: blockdev writev readv block ...passed
00:09:30.690 Test: blockdev writev readv size > 128k ...passed
00:09:30.690 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:30.690 Test: blockdev comparev and writev ...passed
00:09:30.690 Test: blockdev nvme passthru rw ...passed
00:09:30.690 Test: blockdev nvme passthru vendor specific ...passed
00:09:30.690 Test: blockdev nvme admin passthru ...passed
00:09:30.690 Test: blockdev copy ...passed
00:09:30.690 Suite: bdevio tests on: Malloc2p2
00:09:30.690 Test: blockdev write read block ...passed
00:09:30.690 Test: blockdev write zeroes read block ...passed
00:09:30.690 Test: blockdev write zeroes read no split ...passed
00:09:30.690 Test: blockdev write zeroes read split ...passed
00:09:30.690 Test: blockdev write zeroes read split partial ...passed
00:09:30.690 Test: blockdev reset ...passed
00:09:30.690 Test: blockdev write read 8 blocks ...passed
00:09:30.690 Test: blockdev write read size > 128k ...passed
00:09:30.690 Test: blockdev write read invalid size ...passed
00:09:30.690 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:30.690 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:30.690 Test: blockdev write read max offset ...passed
00:09:30.690 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:30.690 Test: blockdev writev readv 8 blocks ...passed
00:09:30.690 Test: blockdev writev readv 30 x 1block ...passed
00:09:30.690 Test: blockdev writev readv block ...passed
00:09:30.690 Test: blockdev writev readv size > 128k ...passed
00:09:30.690 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:30.690 Test: blockdev comparev and writev ...passed
00:09:30.690 Test: blockdev nvme passthru rw ...passed
00:09:30.690 Test: blockdev nvme passthru vendor specific ...passed
00:09:30.690 Test: blockdev nvme admin passthru ...passed
00:09:30.690 Test: blockdev copy ...passed
00:09:30.690 Suite: bdevio tests on: Malloc2p1
00:09:30.690 Test: blockdev write read block ...passed
00:09:30.690 Test: blockdev write zeroes read block ...passed
00:09:30.690 Test: blockdev write zeroes read no split ...passed
00:09:30.690 Test: blockdev write zeroes read split ...passed
00:09:30.690 Test: blockdev write zeroes read split partial ...passed
00:09:30.690 Test: blockdev reset ...passed
00:09:30.690 Test: blockdev write read 8 blocks ...passed
00:09:30.690 Test: blockdev write read size > 128k ...passed
00:09:30.690 Test: blockdev write read invalid size ...passed
00:09:30.690 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:30.690 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:30.690 Test: blockdev write read max offset ...passed
00:09:30.690 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:30.690 Test: blockdev writev readv 8 blocks ...passed
00:09:30.690 Test: blockdev writev readv 30 x 1block ...passed
00:09:30.690 Test: blockdev writev readv block ...passed
00:09:30.690 Test: blockdev writev readv size > 128k ...passed
00:09:30.690 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:30.690 Test: blockdev comparev and writev ...passed
00:09:30.690 Test: blockdev nvme passthru rw ...passed
00:09:30.690 Test: blockdev nvme passthru vendor specific ...passed
00:09:30.690 Test: blockdev nvme admin passthru ...passed
00:09:30.690 Test: blockdev copy ...passed
00:09:30.690 Suite: bdevio tests on: Malloc2p0
00:09:30.690 Test: blockdev write read block ...passed
00:09:30.690 Test: blockdev write zeroes read block ...passed
00:09:30.690 Test: blockdev write zeroes read no split ...passed
00:09:30.690 Test: blockdev write zeroes read split ...passed
00:09:30.949 Test: blockdev write zeroes read split partial ...passed
00:09:30.949 Test: blockdev reset ...passed
00:09:30.949 Test: blockdev write read 8 blocks ...passed
00:09:30.949 Test: blockdev write read size > 128k ...passed
00:09:30.949 Test: blockdev write read invalid size ...passed
00:09:30.950 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:30.950 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:30.950 Test: blockdev write read max offset ...passed
00:09:30.950 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:30.950 Test: blockdev writev readv 8 blocks ...passed
00:09:30.950 Test: blockdev writev readv 30 x 1block ...passed
00:09:30.950 Test: blockdev writev readv block ...passed
00:09:30.950 Test: blockdev writev readv size > 128k ...passed
00:09:30.950 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:30.950 Test: blockdev comparev and writev ...passed
00:09:30.950 Test: blockdev nvme passthru rw ...passed
00:09:30.950 Test: blockdev nvme passthru vendor specific ...passed
00:09:30.950 Test: blockdev nvme admin passthru ...passed
00:09:30.950 Test: blockdev copy ...passed
00:09:30.950 Suite: bdevio tests on: Malloc1p1
00:09:30.950 Test: blockdev write read block ...passed
00:09:30.950 Test: blockdev write zeroes read block ...passed
00:09:30.950 Test: blockdev write zeroes read no split ...passed
00:09:30.950 Test: blockdev write zeroes read split ...passed
00:09:30.950 Test: blockdev write zeroes read split partial ...passed
00:09:30.950 Test: blockdev reset ...passed
00:09:30.950 Test: blockdev write read 8 blocks ...passed
00:09:30.950 Test: blockdev write read size > 128k ...passed
00:09:30.950 Test: blockdev write read invalid size ...passed
00:09:30.950 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:30.950 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:30.950 Test: blockdev write read max offset ...passed
00:09:30.950 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:30.950 Test: blockdev writev readv 8 blocks ...passed
00:09:30.950 Test: blockdev writev readv 30 x 1block ...passed
00:09:30.950 Test: blockdev writev readv block ...passed
00:09:30.950 Test: blockdev writev readv size > 128k ...passed
00:09:30.950 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:30.950 Test: blockdev comparev and writev ...passed
00:09:30.950 Test: blockdev nvme passthru rw ...passed
00:09:30.950 Test: blockdev nvme passthru vendor specific ...passed
00:09:30.950 Test: blockdev nvme admin passthru ...passed
00:09:30.950 Test: blockdev copy ...passed
00:09:30.950 Suite: bdevio tests on: Malloc1p0
00:09:30.950 Test: blockdev write read block ...passed
00:09:30.950 Test: blockdev write zeroes read block ...passed
00:09:30.950 Test: blockdev write zeroes read no split ...passed
00:09:30.950 Test: blockdev write zeroes read split ...passed
00:09:30.950 Test: blockdev write zeroes read split partial ...passed
00:09:30.950 Test: blockdev reset ...passed
00:09:30.950 Test: blockdev write read 8 blocks ...passed
00:09:30.950 Test: blockdev write read size > 128k ...passed
00:09:30.950 Test: blockdev write read invalid size ...passed
00:09:30.950 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:30.950 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:30.950 Test: blockdev write read max offset ...passed
00:09:30.950 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:30.950 Test: blockdev writev readv 8 blocks ...passed
00:09:30.950 Test: blockdev writev readv 30 x 1block ...passed
00:09:30.950 Test: blockdev writev readv block ...passed
00:09:30.950 Test: blockdev writev readv size > 128k ...passed
00:09:30.950 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:30.950 Test: blockdev comparev and writev ...passed
00:09:30.950 Test: blockdev nvme passthru rw ...passed
00:09:30.950 Test: blockdev nvme passthru vendor specific ...passed
00:09:30.950 Test: blockdev nvme admin passthru ...passed
00:09:30.950 Test: blockdev copy ...passed
00:09:30.950 Suite: bdevio tests on: Malloc0
00:09:30.950 Test: blockdev write read block ...passed
00:09:30.950 Test: blockdev write zeroes
read block ...passed 00:09:30.950 Test: blockdev write zeroes read no split ...passed 00:09:30.950 Test: blockdev write zeroes read split ...passed 00:09:30.950 Test: blockdev write zeroes read split partial ...passed 00:09:30.950 Test: blockdev reset ...passed 00:09:30.950 Test: blockdev write read 8 blocks ...passed 00:09:30.950 Test: blockdev write read size > 128k ...passed 00:09:30.950 Test: blockdev write read invalid size ...passed 00:09:30.950 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:30.950 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:30.950 Test: blockdev write read max offset ...passed 00:09:30.950 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:30.950 Test: blockdev writev readv 8 blocks ...passed 00:09:30.950 Test: blockdev writev readv 30 x 1block ...passed 00:09:30.950 Test: blockdev writev readv block ...passed 00:09:30.950 Test: blockdev writev readv size > 128k ...passed 00:09:30.950 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:30.950 Test: blockdev comparev and writev ...passed 00:09:30.950 Test: blockdev nvme passthru rw ...passed 00:09:30.950 Test: blockdev nvme passthru vendor specific ...passed 00:09:30.950 Test: blockdev nvme admin passthru ...passed 00:09:30.950 Test: blockdev copy ...passed 00:09:30.950 00:09:30.950 Run Summary: Type Total Ran Passed Failed Inactive 00:09:30.950 suites 16 16 n/a 0 0 00:09:30.950 tests 368 368 368 0 0 00:09:30.950 asserts 2224 2224 2224 0 n/a 00:09:30.950 00:09:30.950 Elapsed time = 2.952 seconds 00:09:30.950 0 00:09:30.950 21:52:50 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 1317441 00:09:30.950 21:52:50 blockdev_general.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 1317441 ']' 00:09:30.950 21:52:50 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 1317441 00:09:31.209 21:52:50 blockdev_general.bdev_bounds -- 
common/autotest_common.sh@953 -- # uname 00:09:31.209 21:52:50 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:31.209 21:52:50 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1317441 00:09:31.209 21:52:50 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:31.209 21:52:50 blockdev_general.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:31.209 21:52:50 blockdev_general.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1317441' 00:09:31.209 killing process with pid 1317441 00:09:31.209 21:52:50 blockdev_general.bdev_bounds -- common/autotest_common.sh@967 -- # kill 1317441 00:09:31.209 21:52:50 blockdev_general.bdev_bounds -- common/autotest_common.sh@972 -- # wait 1317441 00:09:33.113 21:52:52 blockdev_general.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:09:33.113 00:09:33.113 real 0m4.613s 00:09:33.113 user 0m11.778s 00:09:33.113 sys 0m0.582s 00:09:33.113 21:52:52 blockdev_general.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:33.113 21:52:52 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:33.113 ************************************ 00:09:33.113 END TEST bdev_bounds 00:09:33.113 ************************************ 00:09:33.113 21:52:52 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:09:33.113 21:52:52 blockdev_general -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:09:33.113 21:52:52 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:33.113 21:52:52 blockdev_general -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:09:33.113 21:52:52 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:33.113 ************************************ 00:09:33.113 START TEST bdev_nbd 00:09:33.113 ************************************ 00:09:33.113 21:52:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:09:33.113 21:52:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:09:33.113 21:52:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:09:33.113 21:52:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.113 21:52:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:09:33.113 21:52:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:09:33.113 21:52:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:09:33.113 21:52:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=16 00:09:33.113 21:52:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:09:33.113 21:52:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:33.113 21:52:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:09:33.113 21:52:52 blockdev_general.bdev_nbd -- 
bdev/blockdev.sh@312 -- # bdev_num=16 00:09:33.113 21:52:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:33.113 21:52:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:09:33.113 21:52:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:09:33.113 21:52:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:09:33.113 21:52:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=1318260 00:09:33.113 21:52:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:33.114 21:52:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:09:33.114 21:52:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 1318260 /var/tmp/spdk-nbd.sock 00:09:33.114 21:52:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 1318260 ']' 00:09:33.114 21:52:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:33.114 21:52:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:33.114 21:52:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:09:33.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:33.114 21:52:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:33.114 21:52:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:33.373 [2024-07-13 21:52:52.506437] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:33.373 [2024-07-13 21:52:52.506528] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.373 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:33.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.373 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:33.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.373 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:33.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.373 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:33.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.373 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:33.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.373 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:33.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.374 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:33.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.374 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:33.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.374 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:33.374 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.374 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:33.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.374 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:33.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.374 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:33.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.374 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:33.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.374 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:33.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.374 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:33.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.374 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:33.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.374 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:33.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.374 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:33.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.374 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:33.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.374 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:33.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.374 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:33.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.374 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:33.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.374 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:33.374 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.374 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:33.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.374 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:33.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.374 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:33.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.374 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:33.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.374 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:33.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.374 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:33.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.374 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:33.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.374 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:33.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.374 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:33.374 [2024-07-13 21:52:52.669115] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.634 [2024-07-13 21:52:52.872715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.202 [2024-07-13 21:52:53.331626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:34.202 [2024-07-13 21:52:53.331683] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:09:34.202 [2024-07-13 21:52:53.331698] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:09:34.202 [2024-07-13 21:52:53.339599] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 
00:09:34.202 [2024-07-13 21:52:53.339632] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:34.202 [2024-07-13 21:52:53.347603] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:34.202 [2024-07-13 21:52:53.347632] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:34.202 [2024-07-13 21:52:53.538956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:34.202 [2024-07-13 21:52:53.539011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.202 [2024-07-13 21:52:53.539026] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980 00:09:34.202 [2024-07-13 21:52:53.539037] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.202 [2024-07-13 21:52:53.541037] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.202 [2024-07-13 21:52:53.541065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:09:34.769 21:52:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:34.769 21:52:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:09:34.769 21:52:53 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:09:34.769 21:52:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:34.769 21:52:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:09:34.769 21:52:53 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:34.769 21:52:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:09:34.769 21:52:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:34.769 21:52:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:09:34.769 21:52:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:34.769 21:52:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:34.769 21:52:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:34.769 21:52:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:34.769 21:52:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:34.769 21:52:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:09:34.769 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:34.769 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:34.769 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:34.769 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:34.769 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:34.769 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:34.769 21:52:54 
blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:34.769 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:34.769 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:34.769 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:34.769 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:34.769 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:34.769 1+0 records in 00:09:34.769 1+0 records out 00:09:34.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262168 s, 15.6 MB/s 00:09:34.769 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:34.769 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:34.769 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:34.769 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:34.769 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:34.769 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:34.769 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:34.769 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:09:35.028 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:35.028 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename 
/dev/nbd1 00:09:35.028 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:35.028 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:35.028 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:35.028 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:35.028 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:35.028 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:35.028 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:35.028 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:35.028 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:35.028 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:35.028 1+0 records in 00:09:35.028 1+0 records out 00:09:35.028 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272337 s, 15.0 MB/s 00:09:35.028 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:35.028 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:35.028 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:35.028 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:35.028 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:35.028 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:35.028 21:52:54 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:35.028 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:09:35.286 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:35.286 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:35.286 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:35.286 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:09:35.286 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:35.286 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:35.286 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:35.286 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:09:35.286 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:35.286 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:35.286 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:35.287 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:35.287 1+0 records in 00:09:35.287 1+0 records out 00:09:35.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028658 s, 14.3 MB/s 00:09:35.287 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:35.287 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:35.287 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 
-- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:35.287 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:35.287 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:35.287 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:35.287 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:35.287 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:09:35.544 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:35.544 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:35.544 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:35.544 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:09:35.544 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:35.544 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:35.544 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:35.544 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:09:35.545 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:35.545 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:35.545 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:35.545 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:35.545 1+0 records in 00:09:35.545 1+0 records 
out 00:09:35.545 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285843 s, 14.3 MB/s 00:09:35.545 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:35.545 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:35.545 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:35.545 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:35.545 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:35.545 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:35.545 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:35.545 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:09:35.545 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:35.545 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:35.545 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:35.545 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:09:35.545 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:35.545 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:35.545 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:35.545 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:09:35.802 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:35.802 21:52:54 
blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:35.802 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:35.802 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:35.802 1+0 records in 00:09:35.802 1+0 records out 00:09:35.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318602 s, 12.9 MB/s 00:09:35.802 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:35.802 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:35.802 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:35.802 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:35.802 21:52:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:35.802 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:35.802 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:35.802 21:52:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:09:35.802 21:52:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:35.802 21:52:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:35.802 21:52:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:35.802 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:09:35.802 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:35.802 
21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:35.802 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:35.802 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:09:35.802 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:35.802 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:35.802 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:35.802 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:35.802 1+0 records in 00:09:35.802 1+0 records out 00:09:35.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387855 s, 10.6 MB/s 00:09:35.802 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:35.802 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:35.802 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:35.802 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:35.802 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:35.802 21:52:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:35.802 21:52:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:35.802 21:52:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:09:36.060 21:52:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
nbd_device=/dev/nbd6 00:09:36.060 21:52:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:36.060 21:52:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:36.060 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:09:36.060 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:36.060 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:36.060 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:36.060 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:09:36.060 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:36.061 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:36.061 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:36.061 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:36.061 1+0 records in 00:09:36.061 1+0 records out 00:09:36.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386399 s, 10.6 MB/s 00:09:36.061 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:36.061 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:36.061 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:36.061 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:36.061 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:36.061 21:52:55 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:36.061 21:52:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:36.061 21:52:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:09:36.319 21:52:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:09:36.319 21:52:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:09:36.319 21:52:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:09:36.319 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:09:36.319 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:36.319 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:36.319 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:36.319 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:09:36.319 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:36.319 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:36.319 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:36.319 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:36.319 1+0 records in 00:09:36.319 1+0 records out 00:09:36.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463711 s, 8.8 MB/s 00:09:36.319 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:36.319 21:52:55 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@884 -- # size=4096 00:09:36.319 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:36.319 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:36.319 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:36.319 21:52:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:36.319 21:52:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:36.319 21:52:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:09:36.578 21:52:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:09:36.578 21:52:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:09:36.578 21:52:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:09:36.578 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:09:36.578 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:36.578 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:36.578 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:36.578 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:09:36.578 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:36.578 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:36.578 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:36.578 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:36.578 1+0 records in 00:09:36.578 1+0 records out 00:09:36.578 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482893 s, 8.5 MB/s 00:09:36.578 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:36.578 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:36.578 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:36.578 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:36.578 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:36.578 21:52:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:36.578 21:52:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:36.578 21:52:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:09:36.836 21:52:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:09:36.836 21:52:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:09:36.836 21:52:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:09:36.836 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:09:36.836 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:36.836 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:36.836 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:36.836 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q 
-w nbd9 /proc/partitions 00:09:36.836 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:36.836 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:36.836 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:36.836 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:36.836 1+0 records in 00:09:36.836 1+0 records out 00:09:36.836 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430694 s, 9.5 MB/s 00:09:36.836 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:36.836 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:36.837 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:36.837 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:36.837 21:52:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:36.837 21:52:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:36.837 21:52:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:36.837 21:52:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:09:36.837 21:52:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:09:36.837 21:52:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:09:36.837 21:52:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:09:36.837 21:52:56 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:09:36.837 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:36.837 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:36.837 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:36.837 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:09:36.837 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:36.837 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:36.837 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:36.837 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:36.837 1+0 records in 00:09:36.837 1+0 records out 00:09:36.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434789 s, 9.4 MB/s 00:09:36.837 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:36.837 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:36.837 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:36.837 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:36.837 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:36.837 21:52:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:36.837 21:52:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:36.837 21:52:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:09:37.095 21:52:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:09:37.095 21:52:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:09:37.095 21:52:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:09:37.095 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:09:37.095 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:37.095 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:37.095 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:37.095 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:09:37.095 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:37.095 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:37.095 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:37.095 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:37.095 1+0 records in 00:09:37.095 1+0 records out 00:09:37.095 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458331 s, 8.9 MB/s 00:09:37.095 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:37.095 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:37.095 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:37.095 21:52:56 
blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:37.095 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:37.095 21:52:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:37.095 21:52:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:37.095 21:52:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:09:37.354 21:52:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:09:37.354 21:52:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:09:37.354 21:52:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:09:37.354 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:09:37.354 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:37.354 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:37.354 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:37.354 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:09:37.354 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:37.354 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:37.354 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:37.354 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:37.354 1+0 records in 00:09:37.354 1+0 records out 00:09:37.354 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458687 s, 8.9 MB/s 00:09:37.354 
21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:37.354 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:37.354 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:37.355 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:37.355 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:37.355 21:52:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:37.355 21:52:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:37.355 21:52:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:09:37.613 21:52:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:09:37.613 21:52:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:09:37.613 21:52:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:09:37.613 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:09:37.613 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:37.613 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:37.613 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:37.613 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:09:37.613 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:37.613 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:37.613 21:52:56 
blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:37.614 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:37.614 1+0 records in 00:09:37.614 1+0 records out 00:09:37.614 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491623 s, 8.3 MB/s 00:09:37.614 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:37.614 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:37.614 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:37.614 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:37.614 21:52:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:37.614 21:52:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:37.614 21:52:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:37.614 21:52:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:09:37.873 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:09:37.873 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:09:37.873 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:09:37.873 21:52:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:09:37.873 21:52:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:37.873 21:52:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:37.873 
21:52:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:37.873 21:52:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:09:37.873 21:52:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:37.873 21:52:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:37.873 21:52:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:37.873 21:52:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:37.873 1+0 records in 00:09:37.873 1+0 records out 00:09:37.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045681 s, 9.0 MB/s 00:09:37.873 21:52:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:37.873 21:52:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:37.873 21:52:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:37.873 21:52:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:37.873 21:52:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:37.873 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:37.873 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:37.873 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:09:38.133 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:09:38.133 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename 
/dev/nbd15 00:09:38.133 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:09:38.133 21:52:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:09:38.133 21:52:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:38.133 21:52:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:38.133 21:52:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:38.133 21:52:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:09:38.133 21:52:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:38.133 21:52:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:38.133 21:52:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:38.133 21:52:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:38.133 1+0 records in 00:09:38.133 1+0 records out 00:09:38.133 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504576 s, 8.1 MB/s 00:09:38.133 21:52:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:38.133 21:52:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:38.133 21:52:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:38.133 21:52:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:38.133 21:52:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:38.133 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:38.133 21:52:57 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:38.133 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:38.133 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd0", 00:09:38.133 "bdev_name": "Malloc0" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd1", 00:09:38.133 "bdev_name": "Malloc1p0" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd2", 00:09:38.133 "bdev_name": "Malloc1p1" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd3", 00:09:38.133 "bdev_name": "Malloc2p0" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd4", 00:09:38.133 "bdev_name": "Malloc2p1" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd5", 00:09:38.133 "bdev_name": "Malloc2p2" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd6", 00:09:38.133 "bdev_name": "Malloc2p3" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd7", 00:09:38.133 "bdev_name": "Malloc2p4" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd8", 00:09:38.133 "bdev_name": "Malloc2p5" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd9", 00:09:38.133 "bdev_name": "Malloc2p6" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd10", 00:09:38.133 "bdev_name": "Malloc2p7" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd11", 00:09:38.133 "bdev_name": "TestPT" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd12", 00:09:38.133 "bdev_name": "raid0" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd13", 00:09:38.133 "bdev_name": "concat0" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd14", 00:09:38.133 "bdev_name": "raid1" 00:09:38.133 }, 00:09:38.133 { 
00:09:38.133 "nbd_device": "/dev/nbd15", 00:09:38.133 "bdev_name": "AIO0" 00:09:38.133 } 00:09:38.133 ]' 00:09:38.133 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:38.133 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd0", 00:09:38.133 "bdev_name": "Malloc0" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd1", 00:09:38.133 "bdev_name": "Malloc1p0" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd2", 00:09:38.133 "bdev_name": "Malloc1p1" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd3", 00:09:38.133 "bdev_name": "Malloc2p0" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd4", 00:09:38.133 "bdev_name": "Malloc2p1" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd5", 00:09:38.133 "bdev_name": "Malloc2p2" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd6", 00:09:38.133 "bdev_name": "Malloc2p3" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd7", 00:09:38.133 "bdev_name": "Malloc2p4" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd8", 00:09:38.133 "bdev_name": "Malloc2p5" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd9", 00:09:38.133 "bdev_name": "Malloc2p6" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd10", 00:09:38.133 "bdev_name": "Malloc2p7" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd11", 00:09:38.133 "bdev_name": "TestPT" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd12", 00:09:38.133 "bdev_name": "raid0" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd13", 00:09:38.133 "bdev_name": "concat0" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 "nbd_device": "/dev/nbd14", 00:09:38.133 "bdev_name": "raid1" 00:09:38.133 }, 00:09:38.133 { 00:09:38.133 
"nbd_device": "/dev/nbd15", 00:09:38.133 "bdev_name": "AIO0" 00:09:38.133 } 00:09:38.133 ]' 00:09:38.133 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:38.393 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:09:38.393 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:38.393 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:09:38.393 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:38.393 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:38.393 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.393 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:38.393 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:38.393 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:38.393 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:38.393 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.393 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.393 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:38.393 21:52:57 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:09:38.393 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.393 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.393 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:38.653 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:38.653 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:38.653 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:38.653 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.653 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.653 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:38.653 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:38.653 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.653 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.653 21:52:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:38.915 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:38.915 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:38.915 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:38.915 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.915 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.915 21:52:58 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:38.915 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:38.915 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.915 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.915 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:38.915 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:38.915 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:38.915 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:38.915 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.915 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.915 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:38.915 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:38.915 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.915 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.915 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:39.215 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:39.215 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:39.215 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:39.215 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:39.215 21:52:58 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:39.215 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:39.215 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:39.215 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:39.215 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:39.215 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:39.215 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:39.475 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:39.475 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:39.475 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:39.475 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:39.475 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:39.475 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:39.475 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:39.475 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:39.475 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:39.475 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:39.475 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:39.475 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
00:09:39.475 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:39.475 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:39.475 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:39.475 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:39.475 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:39.475 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:39.475 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:09:39.734 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:09:39.734 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:09:39.734 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:09:39.734 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:39.734 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:39.734 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:09:39.734 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:39.734 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:39.734 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:39.734 21:52:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:09:39.994 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:09:39.994 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd8 00:09:39.994 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:09:39.994 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:39.994 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:39.994 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:09:39.994 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:39.994 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:39.994 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:39.994 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:09:39.994 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:09:39.994 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:09:39.994 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:09:39.994 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:39.994 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:39.994 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:09:39.994 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:39.994 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:39.994 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:39.994 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:40.253 21:52:59 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:40.253 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:40.253 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:40.253 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:40.253 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:40.253 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:40.253 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:40.253 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:40.253 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:40.253 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:40.512 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:40.512 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:40.512 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:40.512 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:40.512 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:40.512 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:40.512 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:40.512 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:40.512 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:40.512 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:40.512 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:40.512 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:40.512 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:40.512 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:40.512 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:40.512 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:40.512 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:40.512 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:40.512 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:40.512 21:52:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:40.771 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:40.771 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:40.771 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:40.771 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:40.771 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:40.771 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:40.771 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:40.771 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:40.771 21:53:00 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:40.771 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:41.031 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:41.031 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:41.031 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:41.031 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:41.031 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:41.031 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:41.031 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:41.031 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:41.031 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:41.031 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:09:41.031 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:41.290 21:53:00 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 
00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' 
'/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:41.290 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:41.291 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:41.291 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:41.550 /dev/nbd0 00:09:41.550 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:41.550 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:41.550 21:53:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:41.550 21:53:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:41.550 21:53:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:41.550 21:53:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:41.550 21:53:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:41.550 21:53:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:41.550 21:53:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:41.550 21:53:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:41.550 21:53:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:41.550 1+0 records in 00:09:41.550 1+0 records out 00:09:41.550 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250573 s, 16.3 MB/s 00:09:41.550 21:53:00 
blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:41.550 21:53:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:41.550 21:53:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:41.550 21:53:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:41.550 21:53:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:41.550 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:41.550 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:41.550 21:53:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:09:41.808 /dev/nbd1 00:09:41.808 21:53:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:41.808 21:53:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:41.808 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:41.808 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:41.808 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:41.808 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:41.808 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:41.808 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:41.808 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:41.808 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 
00:09:41.808 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:41.808 1+0 records in 00:09:41.808 1+0 records out 00:09:41.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274839 s, 14.9 MB/s 00:09:41.808 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:41.808 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:41.808 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:41.808 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:41.808 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:41.808 21:53:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:41.808 21:53:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:41.808 21:53:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:09:42.066 /dev/nbd10 00:09:42.066 21:53:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:42.066 21:53:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:42.066 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:09:42.066 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:42.066 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:42.066 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:42.066 21:53:01 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:09:42.066 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:42.066 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:42.066 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:42.066 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:42.066 1+0 records in 00:09:42.066 1+0 records out 00:09:42.066 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275302 s, 14.9 MB/s 00:09:42.066 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:42.066 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:42.066 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:42.066 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:42.066 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:42.066 21:53:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:42.066 21:53:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:42.066 21:53:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:09:42.325 /dev/nbd11 00:09:42.325 21:53:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:42.325 21:53:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:42.325 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # 
local nbd_name=nbd11 00:09:42.325 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:42.325 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:42.325 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:42.325 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:09:42.325 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:42.325 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:42.325 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:42.325 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:42.325 1+0 records in 00:09:42.325 1+0 records out 00:09:42.325 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287899 s, 14.2 MB/s 00:09:42.325 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:42.325 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:42.325 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:42.325 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:42.325 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:42.325 21:53:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:42.325 21:53:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:42.325 21:53:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:09:42.325 /dev/nbd12 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:42.584 1+0 records in 00:09:42.584 1+0 records out 00:09:42.584 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342702 s, 12.0 MB/s 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@887 -- # return 0 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:09:42.584 /dev/nbd13 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:42.584 1+0 records in 00:09:42.584 1+0 records out 00:09:42.584 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409158 s, 10.0 MB/s 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@884 -- # size=4096 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:42.584 21:53:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:09:42.843 /dev/nbd14 00:09:42.843 21:53:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:42.843 21:53:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:42.843 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:09:42.843 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:42.843 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:42.843 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:42.843 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:09:42.843 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:42.843 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:42.843 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:42.843 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:09:42.843 1+0 records in 00:09:42.843 1+0 records out 00:09:42.843 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325098 s, 12.6 MB/s 00:09:42.843 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:42.843 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:42.843 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:42.843 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:42.843 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:42.843 21:53:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:42.843 21:53:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:42.843 21:53:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:09:43.102 /dev/nbd15 00:09:43.102 21:53:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:09:43.102 21:53:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:09:43.102 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:09:43.102 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:43.102 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:43.102 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:43.102 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:09:43.102 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:43.102 21:53:02 
blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:43.102 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:43.102 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:43.102 1+0 records in 00:09:43.102 1+0 records out 00:09:43.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354383 s, 11.6 MB/s 00:09:43.102 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:43.102 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:43.102 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:43.102 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:43.102 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:43.102 21:53:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:43.102 21:53:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:43.102 21:53:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:09:43.362 /dev/nbd2 00:09:43.362 21:53:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:09:43.362 21:53:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:09:43.362 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:09:43.362 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:43.362 21:53:02 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:43.362 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:43.362 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:09:43.362 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:43.362 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:43.362 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:43.362 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:43.362 1+0 records in 00:09:43.362 1+0 records out 00:09:43.362 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436168 s, 9.4 MB/s 00:09:43.362 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:43.362 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:43.362 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:43.362 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:43.362 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:43.362 21:53:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:43.362 21:53:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:43.362 21:53:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:09:43.621 /dev/nbd3 00:09:43.621 21:53:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd3 00:09:43.621 21:53:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:09:43.621 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:09:43.621 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:43.621 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:43.621 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:43.621 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:09:43.621 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:43.621 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:43.621 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:43.621 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:43.621 1+0 records in 00:09:43.621 1+0 records out 00:09:43.621 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040988 s, 10.0 MB/s 00:09:43.621 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:43.621 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:43.621 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:43.621 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:43.621 21:53:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:43.621 21:53:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:43.621 21:53:02 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:43.621 21:53:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:09:43.881 /dev/nbd4 00:09:43.881 21:53:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:09:43.881 21:53:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:09:43.881 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:09:43.881 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:43.881 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:43.881 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:43.881 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:09:43.881 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:43.881 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:43.881 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:43.881 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:43.881 1+0 records in 00:09:43.881 1+0 records out 00:09:43.881 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483199 s, 8.5 MB/s 00:09:43.881 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:43.881 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:43.881 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:43.881 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:43.881 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:43.881 21:53:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:43.881 21:53:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:43.881 21:53:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:09:43.881 /dev/nbd5 00:09:43.881 21:53:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:09:43.881 21:53:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:09:43.881 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:09:43.881 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:43.881 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:43.881 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:43.881 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:44.140 1+0 records in 00:09:44.140 1+0 records out 00:09:44.140 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00056035 s, 7.3 MB/s 
00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:09:44.140 /dev/nbd6 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- 
# (( i <= 20 )) 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:44.140 1+0 records in 00:09:44.140 1+0 records out 00:09:44.140 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430628 s, 9.5 MB/s 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:44.140 21:53:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:09:44.398 /dev/nbd7 00:09:44.398 21:53:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:09:44.398 21:53:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:09:44.398 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:09:44.398 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:44.398 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:44.398 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:44.398 21:53:03 
blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:09:44.398 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:44.398 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:44.398 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:44.398 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:44.398 1+0 records in 00:09:44.398 1+0 records out 00:09:44.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000595075 s, 6.9 MB/s 00:09:44.398 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:44.398 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:44.398 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:44.398 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:44.398 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:44.398 21:53:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:44.398 21:53:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:44.398 21:53:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:09:44.656 /dev/nbd8 00:09:44.656 21:53:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:09:44.656 21:53:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:09:44.656 21:53:03 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:09:44.656 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:44.656 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:44.656 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:44.656 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:09:44.656 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:44.656 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:44.657 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:44.657 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:44.657 1+0 records in 00:09:44.657 1+0 records out 00:09:44.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460983 s, 8.9 MB/s 00:09:44.657 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:44.657 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:44.657 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:44.657 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:44.657 21:53:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:44.657 21:53:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:44.657 21:53:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:44.657 21:53:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:09:44.914 /dev/nbd9 00:09:44.914 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:09:44.914 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:09:44.914 21:53:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:09:44.914 21:53:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:44.914 21:53:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:44.914 21:53:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:44.914 21:53:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:09:44.914 21:53:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:44.914 21:53:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:44.914 21:53:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:44.914 21:53:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:44.914 1+0 records in 00:09:44.914 1+0 records out 00:09:44.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000705414 s, 5.8 MB/s 00:09:44.914 21:53:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:44.914 21:53:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:44.914 21:53:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:44.914 21:53:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 
00:09:44.914 21:53:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:44.914 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:44.914 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:44.914 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:44.914 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:44.915 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:45.173 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd0", 00:09:45.173 "bdev_name": "Malloc0" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd1", 00:09:45.173 "bdev_name": "Malloc1p0" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd10", 00:09:45.173 "bdev_name": "Malloc1p1" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd11", 00:09:45.173 "bdev_name": "Malloc2p0" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd12", 00:09:45.173 "bdev_name": "Malloc2p1" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd13", 00:09:45.173 "bdev_name": "Malloc2p2" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd14", 00:09:45.173 "bdev_name": "Malloc2p3" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd15", 00:09:45.173 "bdev_name": "Malloc2p4" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd2", 00:09:45.173 "bdev_name": "Malloc2p5" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd3", 00:09:45.173 "bdev_name": "Malloc2p6" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd4", 00:09:45.173 "bdev_name": "Malloc2p7" 00:09:45.173 }, 00:09:45.173 
{ 00:09:45.173 "nbd_device": "/dev/nbd5", 00:09:45.173 "bdev_name": "TestPT" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd6", 00:09:45.173 "bdev_name": "raid0" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd7", 00:09:45.173 "bdev_name": "concat0" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd8", 00:09:45.173 "bdev_name": "raid1" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd9", 00:09:45.173 "bdev_name": "AIO0" 00:09:45.173 } 00:09:45.173 ]' 00:09:45.173 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd0", 00:09:45.173 "bdev_name": "Malloc0" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd1", 00:09:45.173 "bdev_name": "Malloc1p0" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd10", 00:09:45.173 "bdev_name": "Malloc1p1" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd11", 00:09:45.173 "bdev_name": "Malloc2p0" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd12", 00:09:45.173 "bdev_name": "Malloc2p1" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd13", 00:09:45.173 "bdev_name": "Malloc2p2" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd14", 00:09:45.173 "bdev_name": "Malloc2p3" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd15", 00:09:45.173 "bdev_name": "Malloc2p4" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd2", 00:09:45.173 "bdev_name": "Malloc2p5" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd3", 00:09:45.173 "bdev_name": "Malloc2p6" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd4", 00:09:45.173 "bdev_name": "Malloc2p7" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd5", 00:09:45.173 "bdev_name": "TestPT" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd6", 00:09:45.173 
"bdev_name": "raid0" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd7", 00:09:45.173 "bdev_name": "concat0" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd8", 00:09:45.173 "bdev_name": "raid1" 00:09:45.173 }, 00:09:45.173 { 00:09:45.173 "nbd_device": "/dev/nbd9", 00:09:45.173 "bdev_name": "AIO0" 00:09:45.173 } 00:09:45.173 ]' 00:09:45.173 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:45.173 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:45.173 /dev/nbd1 00:09:45.173 /dev/nbd10 00:09:45.173 /dev/nbd11 00:09:45.173 /dev/nbd12 00:09:45.173 /dev/nbd13 00:09:45.173 /dev/nbd14 00:09:45.173 /dev/nbd15 00:09:45.173 /dev/nbd2 00:09:45.173 /dev/nbd3 00:09:45.173 /dev/nbd4 00:09:45.173 /dev/nbd5 00:09:45.173 /dev/nbd6 00:09:45.173 /dev/nbd7 00:09:45.173 /dev/nbd8 00:09:45.173 /dev/nbd9' 00:09:45.173 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:45.173 /dev/nbd1 00:09:45.173 /dev/nbd10 00:09:45.173 /dev/nbd11 00:09:45.173 /dev/nbd12 00:09:45.173 /dev/nbd13 00:09:45.173 /dev/nbd14 00:09:45.173 /dev/nbd15 00:09:45.173 /dev/nbd2 00:09:45.173 /dev/nbd3 00:09:45.173 /dev/nbd4 00:09:45.173 /dev/nbd5 00:09:45.173 /dev/nbd6 00:09:45.173 /dev/nbd7 00:09:45.173 /dev/nbd8 00:09:45.173 /dev/nbd9' 00:09:45.173 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:45.173 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:09:45.173 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:09:45.173 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:09:45.173 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:09:45.173 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 
/dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:09:45.173 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:45.173 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:45.173 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:45.173 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:09:45.173 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:45.173 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:45.173 256+0 records in 00:09:45.173 256+0 records out 00:09:45.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103864 s, 101 MB/s 00:09:45.173 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:45.173 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:45.173 256+0 records in 00:09:45.173 256+0 records out 00:09:45.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.120087 s, 8.7 MB/s 00:09:45.173 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:45.173 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:45.432 256+0 records in 00:09:45.432 256+0 records out 00:09:45.432 1048576 bytes (1.0 
MB, 1.0 MiB) copied, 0.123716 s, 8.5 MB/s 00:09:45.432 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:45.432 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:45.432 256+0 records in 00:09:45.432 256+0 records out 00:09:45.432 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12584 s, 8.3 MB/s 00:09:45.432 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:45.432 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:45.690 256+0 records in 00:09:45.690 256+0 records out 00:09:45.690 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124794 s, 8.4 MB/s 00:09:45.690 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:45.690 21:53:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:45.690 256+0 records in 00:09:45.690 256+0 records out 00:09:45.690 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123601 s, 8.5 MB/s 00:09:45.690 21:53:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:45.690 21:53:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:45.948 256+0 records in 00:09:45.948 256+0 records out 00:09:45.948 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124428 s, 8.4 MB/s 00:09:45.948 21:53:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:45.948 21:53:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:45.948 256+0 records in 00:09:45.948 256+0 records out 00:09:45.948 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126486 s, 8.3 MB/s 00:09:45.948 21:53:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:45.948 21:53:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:09:46.205 256+0 records in 00:09:46.205 256+0 records out 00:09:46.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0710176 s, 14.8 MB/s 00:09:46.205 21:53:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:46.205 21:53:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:09:46.205 256+0 records in 00:09:46.205 256+0 records out 00:09:46.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.120921 s, 8.7 MB/s 00:09:46.205 21:53:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:46.205 21:53:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:09:46.464 256+0 records in 00:09:46.464 256+0 records out 00:09:46.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121781 s, 8.6 MB/s 00:09:46.464 21:53:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:46.464 21:53:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:09:46.464 256+0 records in 00:09:46.464 256+0 records out 00:09:46.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.116648 s, 9.0 
MB/s 00:09:46.464 21:53:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:46.464 21:53:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:09:46.722 256+0 records in 00:09:46.722 256+0 records out 00:09:46.722 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.120875 s, 8.7 MB/s 00:09:46.722 21:53:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:46.722 21:53:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:09:46.722 256+0 records in 00:09:46.722 256+0 records out 00:09:46.722 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123596 s, 8.5 MB/s 00:09:46.722 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:46.722 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:09:46.981 256+0 records in 00:09:46.981 256+0 records out 00:09:46.981 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126656 s, 8.3 MB/s 00:09:46.981 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:46.981 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:09:46.981 256+0 records in 00:09:46.981 256+0 records out 00:09:46.981 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127683 s, 8.2 MB/s 00:09:46.981 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:46.981 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:09:47.239 256+0 records in 00:09:47.240 256+0 records out 00:09:47.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122182 s, 8.6 MB/s 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # 
for i in "${nbd_list[@]}" 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd15 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd2 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:47.240 21:53:06 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd3 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd4 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd5 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd6 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd7 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd8 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd9 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:09:47.240 21:53:06 blockdev_general.bdev_nbd 
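The write-then-verify phase traced above (nbd_common.sh@77-85) follows a simple pattern: dd the same 1 MiB random file onto every nbd device, then cmp each device back against the source file. A minimal, runnable sketch of that pattern is below; plain files stand in for the /dev/nbdX devices (an assumption, so `oflag=direct` is dropped), and the paths are placeholders, not the script's real ones.

```shell
#!/bin/sh
# Sketch of the nbd_common.sh write/verify loop, using ordinary files
# in a temp dir instead of /dev/nbdX devices (assumption).
set -e
tmp=$(mktemp -d)

# Source data: 256 blocks of 4096 bytes = 1 MiB of random bytes,
# mirroring the dd parameters in the trace.
dd if=/dev/urandom of="$tmp/nbdrandtest" bs=4096 count=256 2>/dev/null

targets="$tmp/t0 $tmp/t1 $tmp/t2"   # stand-ins for the nbd_list entries

# Write phase: copy the same source onto every target.
for t in $targets; do
    dd if="$tmp/nbdrandtest" of="$t" bs=4096 count=256 2>/dev/null
done

# Verify phase: byte-compare the first 1M of each target against the
# source; cmp exits non-zero on the first mismatch, which aborts under set -e.
for t in $targets; do
    cmp -b -n 1M "$tmp/nbdrandtest" "$t"
done
echo OK   # reached only if every cmp matched
```

The real script then removes the temp file; here the files are left under `$tmp`, so clean up with `rm -rf "$tmp"` when done.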
-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:47.240 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:47.499 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:47.499 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:47.499 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:47.499 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:47.499 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:47.499 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:47.499 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:47.499 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:47.499 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:47.499 21:53:06 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:47.758 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:47.758 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:47.758 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:47.758 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:47.758 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:47.758 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:47.758 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:47.758 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:47.758 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:47.758 21:53:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:47.758 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:47.758 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:47.758 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:47.758 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:47.758 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:47.758 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:48.017 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:48.017 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:48.017 
21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:48.017 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:48.017 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:48.017 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:48.017 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:48.017 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:48.017 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:48.017 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:48.017 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:48.017 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:48.017 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:48.017 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:48.277 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:48.277 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:48.277 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:48.277 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:48.277 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:48.277 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:48.277 21:53:07 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:09:48.277 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:48.277 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:48.277 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:48.536 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:48.536 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:48.536 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:48.536 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:48.536 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:48.536 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:48.536 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:48.536 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:48.536 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:48.536 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:48.536 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:48.536 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:48.536 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:48.536 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:48.536 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:48.536 21:53:07 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:48.795 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:48.795 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:48.795 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:48.795 21:53:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:09:48.795 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:09:48.795 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:09:48.795 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:09:48.795 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:48.795 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:48.795 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:09:48.795 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:48.795 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:48.796 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:48.796 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:49.055 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:49.055 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:49.055 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:49.055 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:09:49.055 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:49.055 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:49.055 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:49.055 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:49.055 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:49.055 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:49.314 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:49.314 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:49.314 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:49.314 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:49.314 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:49.314 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:49.314 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:49.314 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:49.314 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:49.314 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:49.314 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:49.314 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:49.314 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # 
local nbd_name=nbd4 00:09:49.314 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:49.314 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:49.314 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:49.573 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:49.573 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:49.573 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:49.573 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:49.573 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:49.573 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:49.573 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:49.573 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:49.573 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:49.573 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:49.573 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:49.573 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:49.573 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:49.573 21:53:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:49.831 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:49.831 21:53:09 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:49.831 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:09:49.831 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:49.831 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:49.831 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:49.831 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:49.831 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:49.831 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:49.831 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:09:50.089 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:09:50.089 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:09:50.089 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:09:50.089 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:50.089 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:50.089 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:09:50.089 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:50.089 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:50.089 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:50.089 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:09:50.351 21:53:09 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:09:50.351 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:09:50.351 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:09:50.351 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:50.351 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:50.351 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:09:50.351 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:50.351 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:50.351 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:50.351 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:09:50.351 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:09:50.351 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:09:50.351 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:09:50.351 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:50.351 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:50.351 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:09:50.351 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:50.351 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:50.351 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:50.351 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:09:50.351 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:50.610 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:50.610 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:50.610 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:50.610 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:50.610 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:50.610 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:50.610 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:50.610 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:50.610 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:50.610 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:50.610 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:50.610 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:50.610 21:53:09 blockdev_general.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:09:50.610 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:50.610 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 
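The teardown loop traced above pairs each `nbd_stop_disk` RPC with a `waitfornbd_exit` poll: grep /proc/partitions up to 20 times until the nbdX entry disappears, then move on. A hedged sketch of that polling idiom follows; a temp file stands in for /proc/partitions (an assumption) so the sketch runs anywhere, and `wait_gone` is a placeholder name for the script's waitfornbd_exit.

```shell
#!/bin/sh
# Sketch of the waitfornbd_exit retry loop from nbd_common.sh@35-45:
# poll a partition table up to 20 times until the device name is gone.
wait_gone() {
    name=$1
    i=1
    while [ "$i" -le 20 ]; do
        if ! grep -q -w "$name" "$table" 2>/dev/null; then
            return 0            # entry disappeared: device released
        fi
        sleep 0.1               # brief pause between polls
        i=$((i + 1))
    done
    return 1                    # still present after 20 attempts
}

table=$(mktemp)
printf '8 0 1000 sda\n' > "$table"   # simulated /proc/partitions content
wait_gone nbd0 && echo released
```

In the real script the poll guards against tearing down the RPC server while the kernel still holds the nbd device; the bounded retry count keeps a stuck device from hanging the test forever.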
00:09:50.610 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:50.610 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:50.610 21:53:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:50.869 malloc_lvol_verify 00:09:50.869 21:53:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:51.128 bd15f06b-6a6e-47d3-bc46-661b36317dce 00:09:51.128 21:53:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:51.128 6a598798-401f-4a49-82dd-cebb1cfb75ca 00:09:51.128 21:53:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:51.387 /dev/nbd0 00:09:51.387 21:53:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:51.387 mke2fs 1.46.5 (30-Dec-2021) 00:09:51.387 Discarding device blocks: 0/4096 done 00:09:51.387 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:51.387 00:09:51.387 Allocating group tables: 0/1 done 00:09:51.387 Writing inode tables: 0/1 done 00:09:51.387 Creating journal (1024 blocks): done 00:09:51.387 Writing superblocks and filesystem accounting information: 0/1 done 00:09:51.387 00:09:51.387 21:53:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:51.387 21:53:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:51.387 21:53:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.387 21:53:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:51.387 21:53:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:51.387 21:53:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:51.387 21:53:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:51.387 21:53:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:51.647 21:53:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:51.647 21:53:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:51.647 21:53:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:51.647 21:53:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:51.647 21:53:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:51.647 21:53:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:51.647 21:53:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:51.647 21:53:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:51.647 21:53:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:51.647 21:53:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:09:51.647 21:53:10 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 1318260 00:09:51.647 21:53:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 1318260 ']' 00:09:51.647 21:53:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 1318260 00:09:51.647 21:53:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:09:51.647 21:53:10 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:51.647 21:53:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1318260 00:09:51.647 21:53:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:51.647 21:53:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:51.647 21:53:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1318260' 00:09:51.647 killing process with pid 1318260 00:09:51.647 21:53:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@967 -- # kill 1318260 00:09:51.647 21:53:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@972 -- # wait 1318260 00:09:54.184 21:53:13 blockdev_general.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:09:54.184 00:09:54.184 real 0m20.745s 00:09:54.184 user 0m24.342s 00:09:54.184 sys 0m10.798s 00:09:54.184 21:53:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:54.184 21:53:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:54.184 ************************************ 00:09:54.184 END TEST bdev_nbd 00:09:54.184 ************************************ 00:09:54.184 21:53:13 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:09:54.184 21:53:13 blockdev_general -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:09:54.184 21:53:13 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:09:54.184 21:53:13 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:09:54.184 21:53:13 blockdev_general -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:09:54.184 21:53:13 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:54.184 21:53:13 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:54.184 21:53:13 blockdev_general -- common/autotest_common.sh@10 -- 
# set +x 00:09:54.184 ************************************ 00:09:54.184 START TEST bdev_fio 00:09:54.184 ************************************ 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:09:54.184 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:09:54.184 21:53:13 
blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo 
filename=Malloc1p1 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:09:54.184 21:53:13 
blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_concat0]' 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=AIO0 00:09:54.184 
21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:54.184 21:53:13 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:09:54.184 ************************************ 00:09:54.185 START TEST bdev_fio_rw_verify 00:09:54.185 ************************************ 00:09:54.185 21:53:13 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:09:54.185 21:53:13 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:09:54.185 21:53:13 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:54.185 21:53:13 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:54.185 21:53:13 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:54.185 21:53:13 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:09:54.185 21:53:13 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:09:54.185 21:53:13 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:54.185 21:53:13 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:54.185 21:53:13 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:09:54.185 21:53:13 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:09:54.185 21:53:13 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:54.185 21:53:13 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:54.185 21:53:13 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:54.185 21:53:13 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:09:54.185 21:53:13 
blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:09:54.185 21:53:13 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:09:54.444 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:54.444 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:54.444 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:54.444 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:54.444 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:54.444 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:54.444 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:54.444 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:54.444 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:54.444 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:54.444 job_Malloc2p7: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:54.444 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:54.444 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:54.444 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:54.444 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:54.444 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:54.444 fio-3.35 00:09:54.444 Starting 16 threads 00:09:54.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.703 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:54.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.703 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:54.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.703 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:54.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.703 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:54.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.703 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:54.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.703 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:54.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.703 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:54.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.703 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:54.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:09:54.703 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:54.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.703 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:54.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.703 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:54.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.703 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:54.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.703 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:54.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.703 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:54.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.703 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:54.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.703 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:54.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.703 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:54.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.703 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:54.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.703 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:54.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.703 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:54.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.703 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:54.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.703 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.704 EAL: 
Requested device 0000:3f:01.6 cannot be used 00:09:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.704 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.704 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.704 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.704 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.704 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.704 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.704 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.704 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.704 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:06.912 00:10:06.912 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=1322889: Sat Jul 13 21:53:25 2024 00:10:06.912 read: IOPS=111k, BW=433MiB/s (454MB/s)(4329MiB/10001msec) 00:10:06.912 slat (usec): min=2, max=357, avg=29.96, stdev=13.08 00:10:06.912 clat (usec): min=6, max=1370, avg=241.03, stdev=116.86 00:10:06.912 lat (usec): min=13, max=1379, avg=271.00, stdev=123.62 00:10:06.912 clat percentiles (usec): 00:10:06.912 | 50.000th=[ 233], 99.000th=[ 515], 99.900th=[ 611], 99.990th=[ 783], 00:10:06.912 | 99.999th=[ 1037] 00:10:06.912 write: IOPS=174k, BW=680MiB/s (713MB/s)(6693MiB/9849msec); 0 zone resets 00:10:06.912 slat (usec): min=6, max=389, avg=40.36, stdev=13.19 
00:10:06.912 clat (usec): min=8, max=1416, avg=277.95, stdev=129.71 00:10:06.912 lat (usec): min=23, max=1537, avg=318.30, stdev=135.98 00:10:06.912 clat percentiles (usec): 00:10:06.912 | 50.000th=[ 265], 99.000th=[ 594], 99.900th=[ 734], 99.990th=[ 938], 00:10:06.912 | 99.999th=[ 1254] 00:10:06.912 bw ( KiB/s): min=576864, max=948025, per=98.95%, avg=688613.95, stdev=6116.42, samples=304 00:10:06.912 iops : min=144216, max=237004, avg=172153.37, stdev=1529.08, samples=304 00:10:06.912 lat (usec) : 10=0.01%, 20=0.06%, 50=1.37%, 100=7.48%, 250=40.68% 00:10:06.912 lat (usec) : 500=46.82%, 750=3.55%, 1000=0.04% 00:10:06.912 lat (msec) : 2=0.01% 00:10:06.912 cpu : usr=98.61%, sys=0.75%, ctx=625, majf=0, minf=135787 00:10:06.912 IO depths : 1=12.4%, 2=24.8%, 4=50.2%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:06.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.912 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.912 issued rwts: total=1108306,1713451,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.912 latency : target=0, window=0, percentile=100.00%, depth=8 00:10:06.912 00:10:06.912 Run status group 0 (all jobs): 00:10:06.912 READ: bw=433MiB/s (454MB/s), 433MiB/s-433MiB/s (454MB/s-454MB/s), io=4329MiB (4540MB), run=10001-10001msec 00:10:06.912 WRITE: bw=680MiB/s (713MB/s), 680MiB/s-680MiB/s (713MB/s-713MB/s), io=6693MiB (7018MB), run=9849-9849msec 00:10:08.823 ----------------------------------------------------- 00:10:08.823 Suppressions used: 00:10:08.823 count bytes template 00:10:08.823 16 140 /usr/src/fio/parse.c 00:10:08.823 12493 1199328 /usr/src/fio/iolog.c 00:10:08.823 1 8 libtcmalloc_minimal.so 00:10:08.823 1 904 libcrypto.so 00:10:08.823 ----------------------------------------------------- 00:10:08.823 00:10:08.823 00:10:08.823 real 0m14.448s 00:10:08.823 user 2m51.113s 00:10:08.823 sys 0m2.281s 00:10:08.823 21:53:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:10:08.823 21:53:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:10:08.823 ************************************ 00:10:08.823 END TEST bdev_fio_rw_verify 00:10:08.823 ************************************ 00:10:08.823 21:53:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:10:08.823 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:10:08.823 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:10:08.823 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:10:08.823 21:53:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:10:08.823 21:53:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:10:08.823 21:53:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:10:08.823 21:53:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:10:08.823 21:53:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:10:08.823 21:53:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:10:08.823 21:53:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:10:08.823 21:53:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:10:08.823 21:53:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:10:08.823 21:53:27 blockdev_general.bdev_fio -- 
common/autotest_common.sh@1301 -- # cat 00:10:08.823 21:53:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:10:08.823 21:53:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:10:08.823 21:53:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:10:08.823 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:10:08.824 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "85a9e68f-d93e-4d48-b035-7a4339be4c09"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "85a9e68f-d93e-4d48-b035-7a4339be4c09",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "ad1c08bb-2dc9-571e-b9bb-26e9b6c00284"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "ad1c08bb-2dc9-571e-b9bb-26e9b6c00284",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "f137220c-4aed-5c94-b22e-c5c327c58c07"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f137220c-4aed-5c94-b22e-c5c327c58c07",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "3c3f0708-ecea-5993-9e1e-677e81ff18ce"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3c3f0708-ecea-5993-9e1e-677e81ff18ce",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' 
"zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "264ce625-e209-5c55-bd1b-ae5ff6abbc23"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "264ce625-e209-5c55-bd1b-ae5ff6abbc23",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "93a0a68c-e9e8-5409-b50c-b7228b9b285d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "93a0a68c-e9e8-5409-b50c-b7228b9b285d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": 
true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "c4529d0c-715c-535e-96f7-b10a2ebdc8f2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c4529d0c-715c-535e-96f7-b10a2ebdc8f2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "65f0743a-1a72-5150-b10b-9fa135521843"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "65f0743a-1a72-5150-b10b-9fa135521843",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' 
"flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "9ac90f74-aa24-5fb1-a94f-b1b5e2c83a81"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9ac90f74-aa24-5fb1-a94f-b1b5e2c83a81",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "0cf8f958-aefe-5792-a12a-fb6df6656f26"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0cf8f958-aefe-5792-a12a-fb6df6656f26",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": 
false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "56984f64-12ee-5e99-a391-ae6c00ad4b26"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "56984f64-12ee-5e99-a391-ae6c00ad4b26",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "b412548d-7c71-5a69-93d8-f8e9b111a112"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b412548d-7c71-5a69-93d8-f8e9b111a112",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' 
' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "954ef0e3-1fb0-4fa6-af76-725f189b737a"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "954ef0e3-1fb0-4fa6-af76-725f189b737a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "954ef0e3-1fb0-4fa6-af76-725f189b737a",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' 
' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "302a1350-4559-4eae-9726-1f5a492dab95",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "9ec22841-bb8d-4b33-aa8c-6bb0691b8097",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "3bdf1ee9-4491-467f-9e2f-5553e41ad62d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3bdf1ee9-4491-467f-9e2f-5553e41ad62d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "3bdf1ee9-4491-467f-9e2f-5553e41ad62d",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "66487e29-732e-4b4c-8520-1dc241caf439",' ' "is_configured": true,' ' 
"data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "a7639838-ada5-44ec-a8b5-a724725944dc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "94556002-43f6-4b50-be1b-a8903f1fb1a6"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "94556002-43f6-4b50-be1b-a8903f1fb1a6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "94556002-43f6-4b50-be1b-a8903f1fb1a6",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "47a067a6-3764-4035-9b1c-06ad866fac2d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "8a0e8e4c-f4c6-465b-bb09-cb806628a420",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "903bc713-2351-455f-a74f-40757d48db51"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "903bc713-2351-455f-a74f-40757d48db51",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:10:08.825 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:10:08.825 Malloc1p0 00:10:08.825 Malloc1p1 00:10:08.825 Malloc2p0 00:10:08.825 Malloc2p1 00:10:08.825 Malloc2p2 00:10:08.825 Malloc2p3 00:10:08.825 Malloc2p4 00:10:08.825 Malloc2p5 00:10:08.825 Malloc2p6 00:10:08.825 Malloc2p7 00:10:08.825 TestPT 00:10:08.825 raid0 00:10:08.825 concat0 ]] 00:10:08.825 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:10:08.826 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "85a9e68f-d93e-4d48-b035-7a4339be4c09"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "85a9e68f-d93e-4d48-b035-7a4339be4c09",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "ad1c08bb-2dc9-571e-b9bb-26e9b6c00284"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "ad1c08bb-2dc9-571e-b9bb-26e9b6c00284",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "f137220c-4aed-5c94-b22e-c5c327c58c07"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": 
"f137220c-4aed-5c94-b22e-c5c327c58c07",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "3c3f0708-ecea-5993-9e1e-677e81ff18ce"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3c3f0708-ecea-5993-9e1e-677e81ff18ce",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "264ce625-e209-5c55-bd1b-ae5ff6abbc23"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "264ce625-e209-5c55-bd1b-ae5ff6abbc23",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "93a0a68c-e9e8-5409-b50c-b7228b9b285d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "93a0a68c-e9e8-5409-b50c-b7228b9b285d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "c4529d0c-715c-535e-96f7-b10a2ebdc8f2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c4529d0c-715c-535e-96f7-b10a2ebdc8f2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' 
' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "65f0743a-1a72-5150-b10b-9fa135521843"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "65f0743a-1a72-5150-b10b-9fa135521843",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "9ac90f74-aa24-5fb1-a94f-b1b5e2c83a81"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9ac90f74-aa24-5fb1-a94f-b1b5e2c83a81",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 
0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "0cf8f958-aefe-5792-a12a-fb6df6656f26"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0cf8f958-aefe-5792-a12a-fb6df6656f26",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "56984f64-12ee-5e99-a391-ae6c00ad4b26"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "56984f64-12ee-5e99-a391-ae6c00ad4b26",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "b412548d-7c71-5a69-93d8-f8e9b111a112"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b412548d-7c71-5a69-93d8-f8e9b111a112",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "954ef0e3-1fb0-4fa6-af76-725f189b737a"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "954ef0e3-1fb0-4fa6-af76-725f189b737a",' ' "assigned_rate_limits": {' 
' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "954ef0e3-1fb0-4fa6-af76-725f189b737a",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "302a1350-4559-4eae-9726-1f5a492dab95",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "9ec22841-bb8d-4b33-aa8c-6bb0691b8097",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "3bdf1ee9-4491-467f-9e2f-5553e41ad62d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3bdf1ee9-4491-467f-9e2f-5553e41ad62d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' 
"read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "3bdf1ee9-4491-467f-9e2f-5553e41ad62d",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "66487e29-732e-4b4c-8520-1dc241caf439",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "a7639838-ada5-44ec-a8b5-a724725944dc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "94556002-43f6-4b50-be1b-a8903f1fb1a6"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "94556002-43f6-4b50-be1b-a8903f1fb1a6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' 
' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "94556002-43f6-4b50-be1b-a8903f1fb1a6",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "47a067a6-3764-4035-9b1c-06ad866fac2d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "8a0e8e4c-f4c6-465b-bb09-cb806628a420",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "903bc713-2351-455f-a74f-40757d48db51"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "903bc713-2351-455f-a74f-40757d48db51",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": 
false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:10:08.826 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:08.826 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:10:08.826 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:10:08.826 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:08.826 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:10:08.826 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:10:08.826 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:08.826 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:10:08.826 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:10:08.826 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:08.826 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:10:08.826 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:10:08.826 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:08.826 21:53:27 
blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:10:08.826 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:10:08.826 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:08.826 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:10:08.826 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:10:08.826 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:08.826 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:10:08.826 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:10:08.826 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:08.826 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:10:08.826 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:10:08.826 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:08.826 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:10:08.826 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:10:08.827 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:08.827 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:10:08.827 21:53:27 
blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:10:08.827 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:08.827 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:10:08.827 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:10:08.827 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:08.827 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:10:08.827 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:10:08.827 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:08.827 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:10:08.827 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:10:08.827 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:08.827 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:10:08.827 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:10:08.827 21:53:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 
--aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:10:08.827 21:53:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:10:08.827 21:53:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:08.827 21:53:27 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:10:08.827 ************************************ 00:10:08.827 START TEST bdev_fio_trim 00:10:08.827 ************************************ 00:10:08.827 21:53:28 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:10:08.827 21:53:28 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:10:08.827 21:53:28 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:08.827 21:53:28 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:08.827 21:53:28 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:08.827 21:53:28 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:10:08.827 21:53:28 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:10:08.827 21:53:28 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:08.827 21:53:28 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:08.827 21:53:28 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:10:08.827 21:53:28 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:10:08.827 21:53:28 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:08.827 21:53:28 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:08.827 21:53:28 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:08.827 21:53:28 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # break 00:10:08.827 21:53:28 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:10:08.827 21:53:28 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:10:09.086 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 
00:10:09.086 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.086 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.086 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.086 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.086 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.086 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.086 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.086 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.086 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.086 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.086 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.086 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.086 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.086 fio-3.35 00:10:09.086 Starting 14 threads 00:10:09.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.345 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:10:09.346 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: 
Requested device 0000:3d:02.7 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 
0000:3f:02.5 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:09.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.346 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:21.558 00:10:21.558 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=1325463: Sat Jul 13 21:53:39 2024 00:10:21.558 write: IOPS=154k, BW=603MiB/s (633MB/s)(6035MiB/10001msec); 0 zone resets 00:10:21.558 slat (usec): min=2, max=433, avg=32.03, stdev= 9.77 00:10:21.558 clat (usec): min=26, max=1367, avg=226.42, stdev=82.27 00:10:21.558 lat (usec): min=33, max=1412, avg=258.45, stdev=86.02 00:10:21.558 clat percentiles (usec): 00:10:21.558 | 50.000th=[ 219], 99.000th=[ 441], 99.900th=[ 510], 99.990th=[ 644], 00:10:21.558 | 99.999th=[ 865] 00:10:21.558 bw ( KiB/s): min=540736, max=874237, per=100.00%, avg=620833.11, stdev=6710.12, samples=266 00:10:21.558 iops : min=135184, max=218558, avg=155208.21, stdev=1677.52, samples=266 00:10:21.558 trim: IOPS=154k, BW=603MiB/s (633MB/s)(6035MiB/10001msec); 0 zone resets 00:10:21.558 slat (usec): min=4, max=454, avg=21.78, stdev= 6.83 00:10:21.558 clat (usec): min=5, max=1019, avg=257.72, stdev=86.25 00:10:21.558 lat (usec): min=14, max=1085, avg=279.50, stdev=89.20 00:10:21.558 clat percentiles (usec): 00:10:21.558 | 50.000th=[ 249], 99.000th=[ 482], 99.900th=[ 553], 99.990th=[ 717], 00:10:21.558 | 99.999th=[ 938] 00:10:21.558 bw ( KiB/s): min=540736, max=874237, per=100.00%, avg=620833.11, stdev=6710.12, samples=266 00:10:21.558 iops : min=135184, max=218558, avg=155208.21, stdev=1677.52, samples=266 00:10:21.558 lat (usec) : 10=0.01%, 20=0.02%, 50=0.15%, 100=2.22%, 250=54.88% 00:10:21.558 lat (usec) : 500=42.38%, 750=0.34%, 1000=0.01% 00:10:21.558 lat (msec) : 2=0.01% 00:10:21.558 cpu : usr=99.60%, sys=0.03%, ctx=520, majf=0, minf=15766 00:10:21.558 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:10:21.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.558 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.558 issued rwts: total=0,1544930,1544932,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.558 latency : target=0, window=0, percentile=100.00%, depth=8 00:10:21.558 00:10:21.558 Run status group 0 (all jobs): 00:10:21.558 WRITE: bw=603MiB/s (633MB/s), 603MiB/s-603MiB/s (633MB/s-633MB/s), io=6035MiB (6328MB), run=10001-10001msec 00:10:21.558 TRIM: bw=603MiB/s (633MB/s), 603MiB/s-603MiB/s (633MB/s-633MB/s), io=6035MiB (6328MB), run=10001-10001msec 00:10:22.935 ----------------------------------------------------- 00:10:22.935 Suppressions used: 00:10:22.935 count bytes template 00:10:22.935 14 129 /usr/src/fio/parse.c 00:10:22.935 1 8 libtcmalloc_minimal.so 00:10:22.935 1 904 libcrypto.so 00:10:22.935 ----------------------------------------------------- 00:10:22.935 00:10:22.935 00:10:22.935 real 0m13.970s 00:10:22.935 user 2m32.076s 00:10:22.935 sys 0m1.443s 00:10:22.935 21:53:41 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:22.935 21:53:41 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:10:22.935 ************************************ 00:10:22.935 END TEST bdev_fio_trim 00:10:22.935 ************************************ 00:10:22.935 21:53:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:10:22.935 21:53:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:10:22.935 21:53:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:10:22.935 21:53:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:10:22.935 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:10:22.935 21:53:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 
00:10:22.935 00:10:22.935 real 0m28.795s 00:10:22.935 user 5m23.387s 00:10:22.935 sys 0m3.936s 00:10:22.935 21:53:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:22.935 21:53:42 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:10:22.935 ************************************ 00:10:22.935 END TEST bdev_fio 00:10:22.935 ************************************ 00:10:22.935 21:53:42 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:10:22.935 21:53:42 blockdev_general -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:22.935 21:53:42 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:22.935 21:53:42 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:10:22.935 21:53:42 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:22.935 21:53:42 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:22.935 ************************************ 00:10:22.935 START TEST bdev_verify 00:10:22.935 ************************************ 00:10:22.935 21:53:42 blockdev_general.bdev_verify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:22.935 [2024-07-13 21:53:42.191113] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:22.935 [2024-07-13 21:53:42.191200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327616 ] 00:10:22.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.935 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:22.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3d:02.3 cannot be used 
00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:22.936 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:22.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.936 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:23.195 [2024-07-13 21:53:42.348269] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:23.195 [2024-07-13 21:53:42.551234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.195 [2024-07-13 21:53:42.551243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.763 [2024-07-13 21:53:42.997990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:23.763 [2024-07-13 21:53:42.998048] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:23.763 [2024-07-13 21:53:42.998066] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:23.763 [2024-07-13 21:53:43.005992] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:23.763 [2024-07-13 21:53:43.006031] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:23.763 [2024-07-13 21:53:43.013987] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:23.763 [2024-07-13 21:53:43.014019] bdev.c:8157:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:24.021 [2024-07-13 21:53:43.211635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:24.021 [2024-07-13 21:53:43.211687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.021 [2024-07-13 21:53:43.211703] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980 00:10:24.021 [2024-07-13 21:53:43.211715] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.021 [2024-07-13 21:53:43.213830] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.021 [2024-07-13 21:53:43.213860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:10:24.279 Running I/O for 5 seconds... 00:10:29.576 00:10:29.576 Latency(us) 00:10:29.576 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:29.576 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x0 length 0x1000 00:10:29.576 Malloc0 : 5.05 1597.60 6.24 0.00 0.00 79997.50 445.64 195454.57 00:10:29.576 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x1000 length 0x1000 00:10:29.576 Malloc0 : 5.13 1570.60 6.14 0.00 0.00 81373.28 445.64 295279.00 00:10:29.576 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x0 length 0x800 00:10:29.576 Malloc1p0 : 5.19 814.16 3.18 0.00 0.00 156606.48 2844.26 186227.10 00:10:29.576 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x800 length 0x800 00:10:29.576 Malloc1p0 : 5.17 816.54 3.19 0.00 0.00 156168.57 2844.26 176160.77 00:10:29.576 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x0 
length 0x800 00:10:29.576 Malloc1p1 : 5.19 813.74 3.18 0.00 0.00 156323.46 2949.12 182032.79 00:10:29.576 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x800 length 0x800 00:10:29.576 Malloc1p1 : 5.17 816.26 3.19 0.00 0.00 155867.66 2922.91 173644.19 00:10:29.576 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x0 length 0x200 00:10:29.576 Malloc2p0 : 5.19 813.46 3.18 0.00 0.00 156030.25 2883.58 178677.35 00:10:29.576 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x200 length 0x200 00:10:29.576 Malloc2p0 : 5.18 816.00 3.19 0.00 0.00 155560.43 2883.58 169449.88 00:10:29.576 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x0 length 0x200 00:10:29.576 Malloc2p1 : 5.19 813.11 3.18 0.00 0.00 155740.27 2870.48 173644.19 00:10:29.576 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x200 length 0x200 00:10:29.576 Malloc2p1 : 5.18 815.75 3.19 0.00 0.00 155239.08 2870.48 165255.58 00:10:29.576 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x0 length 0x200 00:10:29.576 Malloc2p2 : 5.20 812.81 3.18 0.00 0.00 155435.10 2909.80 169449.88 00:10:29.576 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x200 length 0x200 00:10:29.576 Malloc2p2 : 5.18 815.50 3.19 0.00 0.00 154919.41 2883.58 160222.41 00:10:29.576 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x0 length 0x200 00:10:29.576 Malloc2p3 : 5.20 812.57 3.17 0.00 0.00 155136.47 2831.16 164416.72 00:10:29.576 Job: Malloc2p3 (Core Mask 0x2, workload: verify, 
depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x200 length 0x200 00:10:29.576 Malloc2p3 : 5.18 815.23 3.18 0.00 0.00 154621.70 2831.16 156028.11 00:10:29.576 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x0 length 0x200 00:10:29.576 Malloc2p4 : 5.20 812.32 3.17 0.00 0.00 154829.37 2975.33 158544.69 00:10:29.576 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x200 length 0x200 00:10:29.576 Malloc2p4 : 5.18 814.96 3.18 0.00 0.00 154319.03 2909.80 150994.94 00:10:29.576 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x0 length 0x200 00:10:29.576 Malloc2p5 : 5.20 812.09 3.17 0.00 0.00 154503.98 2883.58 154350.39 00:10:29.576 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x200 length 0x200 00:10:29.576 Malloc2p5 : 5.18 814.69 3.18 0.00 0.00 153994.82 2883.58 145961.78 00:10:29.576 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x0 length 0x200 00:10:29.576 Malloc2p6 : 5.20 811.85 3.17 0.00 0.00 154184.64 2844.26 150156.08 00:10:29.576 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x200 length 0x200 00:10:29.576 Malloc2p6 : 5.19 814.43 3.18 0.00 0.00 153684.35 2844.26 140928.61 00:10:29.576 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x0 length 0x200 00:10:29.576 Malloc2p7 : 5.20 811.62 3.17 0.00 0.00 153871.72 2831.16 145961.78 00:10:29.576 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x200 length 0x200 00:10:29.576 Malloc2p7 : 5.19 814.13 3.18 0.00 0.00 153379.43 2818.05 
135895.45 00:10:29.576 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x0 length 0x1000 00:10:29.576 TestPT : 5.22 809.60 3.16 0.00 0.00 153809.01 9437.18 146800.64 00:10:29.576 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x1000 length 0x1000 00:10:29.576 TestPT : 5.20 789.60 3.08 0.00 0.00 157359.93 12740.20 195454.57 00:10:29.576 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x0 length 0x2000 00:10:29.576 raid0 : 5.21 810.91 3.17 0.00 0.00 153150.87 2975.33 127506.84 00:10:29.576 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x2000 length 0x2000 00:10:29.576 raid0 : 5.19 813.52 3.18 0.00 0.00 152655.94 2962.23 116601.65 00:10:29.576 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x0 length 0x2000 00:10:29.576 concat0 : 5.21 810.63 3.17 0.00 0.00 152859.13 2988.44 122473.68 00:10:29.576 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x2000 length 0x2000 00:10:29.576 concat0 : 5.19 813.17 3.18 0.00 0.00 152385.50 2962.23 110729.63 00:10:29.576 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x0 length 0x1000 00:10:29.576 raid1 : 5.21 810.31 3.17 0.00 0.00 152559.39 3670.02 116601.65 00:10:29.576 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x1000 length 0x1000 00:10:29.576 raid1 : 5.21 835.25 3.26 0.00 0.00 148005.88 2608.33 114085.07 00:10:29.576 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x0 length 0x4e2 00:10:29.576 AIO0 : 5.22 833.81 3.26 0.00 
0.00 147898.67 507.90 116601.65 00:10:29.576 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:29.576 Verification LBA range: start 0x4e2 length 0x4e2 00:10:29.576 AIO0 : 5.21 834.97 3.26 0.00 0.00 147687.87 1487.67 119118.23 00:10:29.576 =================================================================================================================== 00:10:29.576 Total : 27611.19 107.86 0.00 0.00 145683.18 445.64 295279.00 00:10:32.114 00:10:32.114 real 0m9.124s 00:10:32.114 user 0m16.710s 00:10:32.114 sys 0m0.488s 00:10:32.114 21:53:51 blockdev_general.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:32.114 21:53:51 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:10:32.114 ************************************ 00:10:32.114 END TEST bdev_verify 00:10:32.114 ************************************ 00:10:32.114 21:53:51 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:10:32.114 21:53:51 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:32.114 21:53:51 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:10:32.114 21:53:51 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:32.114 21:53:51 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:32.114 ************************************ 00:10:32.114 START TEST bdev_verify_big_io 00:10:32.114 ************************************ 00:10:32.114 21:53:51 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:32.114 [2024-07-13 
21:53:51.406974] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:32.114 [2024-07-13 21:53:51.407073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1329212 ] 00:10:32.114 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.114 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:32.114 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.114 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:32.114 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.114 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:32.114 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.114 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:32.114 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.114 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:32.114 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.114 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:32.114 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.114 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:32.114 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.114 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:32.114 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.114 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:32.114 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.114 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:32.114 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.114 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:32.373 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:10:32.374 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.374 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.374 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.374 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.374 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.374 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.374 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.374 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.374 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.374 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.374 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.374 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.374 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.374 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:10:32.374 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.374 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.374 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.374 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.374 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.374 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.374 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:32.374 [2024-07-13 21:53:51.567070] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:32.633 [2024-07-13 21:53:51.768572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.633 [2024-07-13 21:53:51.768580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.892 [2024-07-13 21:53:52.205374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:32.892 [2024-07-13 21:53:52.205435] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:32.892 [2024-07-13 21:53:52.205450] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:32.892 [2024-07-13 21:53:52.213374] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:32.892 [2024-07-13 21:53:52.213413] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:32.892 [2024-07-13 21:53:52.221381] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
Malloc2 00:10:32.892 [2024-07-13 21:53:52.221414] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:33.150 [2024-07-13 21:53:52.417919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:33.150 [2024-07-13 21:53:52.417975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.150 [2024-07-13 21:53:52.417991] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980 00:10:33.150 [2024-07-13 21:53:52.418003] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.150 [2024-07-13 21:53:52.420095] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.150 [2024-07-13 21:53:52.420125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:10:33.721 [2024-07-13 21:53:52.815063] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:10:33.721 [2024-07-13 21:53:52.818668] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:10:33.721 [2024-07-13 21:53:52.822926] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:10:33.721 [2024-07-13 21:53:52.826514] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:10:33.721 [2024-07-13 21:53:52.830564] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:10:33.721 [2024-07-13 21:53:52.834476] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:10:33.722 [2024-07-13 21:53:52.838059] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:10:33.722 [2024-07-13 21:53:52.841973] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:10:33.722 [2024-07-13 21:53:52.845613] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:10:33.722 [2024-07-13 21:53:52.849550] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:10:33.722 [2024-07-13 21:53:52.853199] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). 
Queue depth is limited to 32 00:10:33.722 [2024-07-13 21:53:52.857288] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:10:33.722 [2024-07-13 21:53:52.860908] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:10:33.722 [2024-07-13 21:53:52.864759] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:10:33.722 [2024-07-13 21:53:52.868531] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:10:33.722 [2024-07-13 21:53:52.872720] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:10:33.722 [2024-07-13 21:53:52.968320] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:10:33.722 [2024-07-13 21:53:52.975871] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). 
Queue depth is limited to 78 00:10:33.722 Running I/O for 5 seconds... 00:10:40.283 00:10:40.283 Latency(us) 00:10:40.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:40.283 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:40.283 Verification LBA range: start 0x0 length 0x100 00:10:40.283 Malloc0 : 5.57 275.93 17.25 0.00 0.00 457722.93 589.82 1415997.03 00:10:40.283 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:40.283 Verification LBA range: start 0x100 length 0x100 00:10:40.283 Malloc0 : 5.65 294.45 18.40 0.00 0.00 428868.27 579.99 1583769.19 00:10:40.283 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:40.283 Verification LBA range: start 0x0 length 0x80 00:10:40.283 Malloc1p0 : 5.77 146.88 9.18 0.00 0.00 824655.94 2700.08 1664299.83 00:10:40.283 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:40.283 Verification LBA range: start 0x80 length 0x80 00:10:40.283 Malloc1p0 : 6.15 57.28 3.58 0.00 0.00 2084661.49 1861.22 3180960.15 00:10:40.283 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:40.283 Verification LBA range: start 0x0 length 0x80 00:10:40.283 Malloc1p1 : 6.00 53.37 3.34 0.00 0.00 2206838.33 1821.90 3516504.47 00:10:40.283 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:40.283 Verification LBA range: start 0x80 length 0x80 00:10:40.283 Malloc1p1 : 6.15 57.27 3.58 0.00 0.00 2040167.72 1782.58 3073585.97 00:10:40.283 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:10:40.283 Verification LBA range: start 0x0 length 0x20 00:10:40.283 Malloc2p0 : 5.73 39.11 2.44 0.00 0.00 750375.17 622.59 1207959.55 00:10:40.283 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:10:40.283 Verification LBA range: start 0x20 length 0x20 00:10:40.283 Malloc2p0 : 5.74 44.57 2.79 0.00 0.00 
662573.66 602.93 1026765.62 00:10:40.283 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:10:40.283 Verification LBA range: start 0x0 length 0x20 00:10:40.283 Malloc2p1 : 5.77 41.56 2.60 0.00 0.00 709828.70 612.76 1194537.78 00:10:40.283 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:10:40.283 Verification LBA range: start 0x20 length 0x20 00:10:40.283 Malloc2p1 : 5.74 44.57 2.79 0.00 0.00 658576.08 593.10 1006632.96 00:10:40.284 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:10:40.284 Verification LBA range: start 0x0 length 0x20 00:10:40.284 Malloc2p2 : 5.78 41.55 2.60 0.00 0.00 705980.38 616.04 1181116.01 00:10:40.284 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:10:40.284 Verification LBA range: start 0x20 length 0x20 00:10:40.284 Malloc2p2 : 5.75 44.56 2.78 0.00 0.00 654748.59 596.38 986500.30 00:10:40.284 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:10:40.284 Verification LBA range: start 0x0 length 0x20 00:10:40.284 Malloc2p3 : 5.78 41.54 2.60 0.00 0.00 701993.06 616.04 1160983.35 00:10:40.284 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:10:40.284 Verification LBA range: start 0x20 length 0x20 00:10:40.284 Malloc2p3 : 5.75 44.55 2.78 0.00 0.00 651092.87 593.10 973078.53 00:10:40.284 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:10:40.284 Verification LBA range: start 0x0 length 0x20 00:10:40.284 Malloc2p4 : 5.78 41.54 2.60 0.00 0.00 697965.47 570.16 1147561.57 00:10:40.284 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:10:40.284 Verification LBA range: start 0x20 length 0x20 00:10:40.284 Malloc2p4 : 5.75 44.54 2.78 0.00 0.00 647317.35 553.78 959656.76 00:10:40.284 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:10:40.284 Verification LBA range: start 0x0 length 0x20 
00:10:40.284 Malloc2p5 : 5.78 41.53 2.60 0.00 0.00 693816.65 537.40 1134139.80 00:10:40.284 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:10:40.284 Verification LBA range: start 0x20 length 0x20 00:10:40.284 Malloc2p5 : 5.75 44.54 2.78 0.00 0.00 643614.94 527.56 946234.98 00:10:40.284 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:10:40.284 Verification LBA range: start 0x0 length 0x20 00:10:40.284 Malloc2p6 : 5.78 41.52 2.60 0.00 0.00 689874.81 547.23 1120718.03 00:10:40.284 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:10:40.284 Verification LBA range: start 0x20 length 0x20 00:10:40.284 Malloc2p6 : 5.82 46.74 2.92 0.00 0.00 611815.79 550.50 926102.32 00:10:40.284 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:10:40.284 Verification LBA range: start 0x0 length 0x20 00:10:40.284 Malloc2p7 : 5.78 41.52 2.59 0.00 0.00 686168.75 547.23 1100585.37 00:10:40.284 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:10:40.284 Verification LBA range: start 0x20 length 0x20 00:10:40.284 Malloc2p7 : 5.82 46.74 2.92 0.00 0.00 608206.75 540.67 912680.55 00:10:40.284 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:40.284 Verification LBA range: start 0x0 length 0x100 00:10:40.284 TestPT : 6.07 53.19 3.32 0.00 0.00 2065696.02 64172.85 3019898.88 00:10:40.284 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:40.284 Verification LBA range: start 0x100 length 0x100 00:10:40.284 TestPT : 6.15 54.78 3.42 0.00 0.00 2014257.56 53267.66 2724619.88 00:10:40.284 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:40.284 Verification LBA range: start 0x0 length 0x200 00:10:40.284 raid0 : 6.13 59.99 3.75 0.00 0.00 1802134.78 1356.60 3154116.61 00:10:40.284 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:40.284 
Verification LBA range: start 0x200 length 0x200 00:10:40.284 raid0 : 6.16 62.39 3.90 0.00 0.00 1743416.39 1330.38 2724619.88 00:10:40.284 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:40.284 Verification LBA range: start 0x0 length 0x200 00:10:40.284 concat0 : 6.14 65.15 4.07 0.00 0.00 1640730.70 1192.76 3046742.43 00:10:40.284 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:40.284 Verification LBA range: start 0x200 length 0x200 00:10:40.284 concat0 : 6.05 73.75 4.61 0.00 0.00 1457881.65 1186.20 2617245.70 00:10:40.284 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:40.284 Verification LBA range: start 0x0 length 0x100 00:10:40.284 raid1 : 6.12 85.63 5.35 0.00 0.00 1231538.04 1553.20 2952790.02 00:10:40.284 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:40.284 Verification LBA range: start 0x100 length 0x100 00:10:40.284 raid1 : 6.15 90.89 5.68 0.00 0.00 1163220.42 1526.99 2523293.29 00:10:40.284 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:10:40.284 Verification LBA range: start 0x0 length 0x4e 00:10:40.284 AIO0 : 6.14 80.16 5.01 0.00 0.00 790380.25 619.32 1758252.24 00:10:40.284 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:10:40.284 Verification LBA range: start 0x4e length 0x4e 00:10:40.284 AIO0 : 6.15 87.42 5.46 0.00 0.00 724407.81 612.76 1436129.69 00:10:40.284 =================================================================================================================== 00:10:40.284 Total : 2289.21 143.08 0.00 0.00 968865.65 527.56 3516504.47 00:10:42.859 00:10:42.859 real 0m10.500s 00:10:42.859 user 0m19.452s 00:10:42.859 sys 0m0.512s 00:10:42.859 21:54:01 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:42.859 21:54:01 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:10:42.859 
************************************ 00:10:42.859 END TEST bdev_verify_big_io 00:10:42.859 ************************************ 00:10:42.859 21:54:01 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:10:42.859 21:54:01 blockdev_general -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:42.859 21:54:01 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:42.859 21:54:01 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:42.859 21:54:01 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:42.859 ************************************ 00:10:42.859 START TEST bdev_write_zeroes 00:10:42.859 ************************************ 00:10:42.859 21:54:01 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:42.859 [2024-07-13 21:54:01.992649] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:42.859 [2024-07-13 21:54:01.992751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1331193 ] 00:10:42.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.859 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:42.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.859 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:42.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.859 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:42.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.859 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:42.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.859 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:42.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.859 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:42.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.859 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:42.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.859 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:42.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.859 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:42.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.859 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:42.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.859 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:42.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.859 EAL: Requested device 0000:3d:02.3 cannot be used 
00:10:42.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.859 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:42.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.859 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:42.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.859 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:42.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.859 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:42.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.860 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:42.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.860 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:42.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.860 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:42.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.860 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:42.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.860 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:42.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.860 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:42.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.860 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:42.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.860 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:42.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.860 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:42.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.860 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:42.860 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.860 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:42.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.860 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:42.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.860 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:42.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.860 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:42.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.860 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:42.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.860 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:42.860 [2024-07-13 21:54:02.150847] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.119 [2024-07-13 21:54:02.356482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.687 [2024-07-13 21:54:02.791497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:43.687 [2024-07-13 21:54:02.791576] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:43.687 [2024-07-13 21:54:02.791591] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:43.687 [2024-07-13 21:54:02.799487] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:43.687 [2024-07-13 21:54:02.799520] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:43.687 [2024-07-13 21:54:02.807498] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:43.687 [2024-07-13 21:54:02.807531] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:43.687 [2024-07-13 21:54:03.010999] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:43.687 [2024-07-13 21:54:03.011051] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.687 [2024-07-13 21:54:03.011084] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:10:43.687 [2024-07-13 21:54:03.011097] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.687 [2024-07-13 21:54:03.013306] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.687 [2024-07-13 21:54:03.013334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:10:44.259 Running I/O for 1 seconds... 00:10:45.194 00:10:45.195 Latency(us) 00:10:45.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:45.195 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:45.195 Malloc0 : 1.04 7161.91 27.98 0.00 0.00 17870.42 494.80 29989.27 00:10:45.195 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:45.195 Malloc1p0 : 1.04 7154.70 27.95 0.00 0.00 17863.58 675.02 29360.13 00:10:45.195 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:45.195 Malloc1p1 : 1.04 7147.55 27.92 0.00 0.00 17849.91 671.74 28521.27 00:10:45.195 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:45.195 Malloc2p0 : 1.04 7140.44 27.89 0.00 0.00 17841.15 668.47 27892.12 00:10:45.195 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:45.195 Malloc2p1 : 1.04 7133.29 27.86 0.00 0.00 17833.23 642.25 27262.98 00:10:45.195 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:45.195 Malloc2p2 : 1.04 7126.23 27.84 0.00 0.00 17823.65 632.42 26528.97 00:10:45.195 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:45.195 
Malloc2p3 : 1.04 7119.15 27.81 0.00 0.00 17812.14 645.53 25899.83 00:10:45.195 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:45.195 Malloc2p4 : 1.04 7112.09 27.78 0.00 0.00 17803.75 668.47 25270.68 00:10:45.195 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:45.195 Malloc2p5 : 1.04 7105.01 27.75 0.00 0.00 17794.90 635.70 24641.54 00:10:45.195 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:45.195 Malloc2p6 : 1.05 7098.00 27.73 0.00 0.00 17783.78 671.74 23907.53 00:10:45.195 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:45.195 Malloc2p7 : 1.05 7090.99 27.70 0.00 0.00 17775.82 632.42 23278.39 00:10:45.195 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:45.195 TestPT : 1.05 7083.91 27.67 0.00 0.00 17763.70 665.19 22649.24 00:10:45.195 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:45.195 raid0 : 1.05 7075.73 27.64 0.00 0.00 17753.18 1140.33 21495.81 00:10:45.195 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:45.195 concat0 : 1.05 7067.70 27.61 0.00 0.00 17725.46 1133.77 20342.37 00:10:45.195 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:45.195 raid1 : 1.05 7057.59 27.57 0.00 0.00 17693.91 1835.01 18979.23 00:10:45.195 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:45.195 AIO0 : 1.05 7051.47 27.54 0.00 0.00 17650.43 737.28 18979.23 00:10:45.195 =================================================================================================================== 00:10:45.195 Total : 113725.75 444.24 0.00 0.00 17789.94 494.80 29989.27 00:10:47.731 00:10:47.731 real 0m4.844s 00:10:47.731 user 0m4.280s 00:10:47.731 sys 0m0.439s 00:10:47.731 21:54:06 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:10:47.731 21:54:06 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:10:47.731 ************************************ 00:10:47.731 END TEST bdev_write_zeroes 00:10:47.731 ************************************ 00:10:47.731 21:54:06 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:10:47.731 21:54:06 blockdev_general -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:47.731 21:54:06 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:47.731 21:54:06 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:47.731 21:54:06 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:47.731 ************************************ 00:10:47.731 START TEST bdev_json_nonenclosed 00:10:47.731 ************************************ 00:10:47.731 21:54:06 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:47.731 [2024-07-13 21:54:06.927793] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:47.731 [2024-07-13 21:54:06.927880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1332425 ] 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3d:02.3 cannot be used 
00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:47.731 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.731 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:47.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.732 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:47.732 [2024-07-13 21:54:07.084409] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.007 [2024-07-13 21:54:07.284414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.007 [2024-07-13 21:54:07.284490] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:10:48.007 [2024-07-13 21:54:07.284509] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:48.007 [2024-07-13 21:54:07.284524] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:48.575 00:10:48.575 real 0m0.869s 00:10:48.575 user 0m0.655s 00:10:48.575 sys 0m0.209s 00:10:48.575 21:54:07 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:10:48.575 21:54:07 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:48.575 21:54:07 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:10:48.575 ************************************ 00:10:48.575 END TEST bdev_json_nonenclosed 00:10:48.575 ************************************ 00:10:48.575 21:54:07 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:10:48.575 21:54:07 blockdev_general -- bdev/blockdev.sh@782 -- # true 00:10:48.575 21:54:07 blockdev_general -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:48.575 21:54:07 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:48.575 21:54:07 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:48.575 21:54:07 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:48.575 ************************************ 00:10:48.575 START TEST bdev_json_nonarray 00:10:48.575 ************************************ 00:10:48.575 21:54:07 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:48.575 [2024-07-13 21:54:07.876340] Starting 
SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:48.575 [2024-07-13 21:54:07.876434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1332647 ] 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:10:48.834 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: 
Requested device 0000:3f:02.1 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.834 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:48.834 [2024-07-13 21:54:08.035229] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.094 [2024-07-13 21:54:08.233011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.094 [2024-07-13 21:54:08.233113] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:10:49.094 [2024-07-13 21:54:08.233134] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:49.094 [2024-07-13 21:54:08.233146] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:49.354 00:10:49.354 real 0m0.854s 00:10:49.354 user 0m0.635s 00:10:49.354 sys 0m0.215s 00:10:49.354 21:54:08 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:10:49.354 21:54:08 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:49.354 21:54:08 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:10:49.354 ************************************ 00:10:49.354 END TEST bdev_json_nonarray 00:10:49.354 ************************************ 00:10:49.354 21:54:08 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:10:49.354 21:54:08 blockdev_general -- bdev/blockdev.sh@785 -- # true 00:10:49.354 21:54:08 blockdev_general -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]] 00:10:49.354 21:54:08 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite '' 00:10:49.354 21:54:08 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:49.354 21:54:08 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:49.354 21:54:08 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:49.354 ************************************ 00:10:49.354 START TEST bdev_qos 00:10:49.354 ************************************ 00:10:49.354 21:54:08 blockdev_general.bdev_qos -- common/autotest_common.sh@1123 -- # qos_test_suite '' 00:10:49.354 21:54:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # QOS_PID=1332736 00:10:49.354 21:54:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 1332736' 00:10:49.354 Process qos testing pid: 1332736 00:10:49.354 21:54:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:10:49.354 21:54:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:10:49.354 21:54:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@449 -- # waitforlisten 1332736 00:10:49.354 21:54:08 blockdev_general.bdev_qos -- common/autotest_common.sh@829 -- # '[' -z 1332736 ']' 00:10:49.354 21:54:08 blockdev_general.bdev_qos -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.354 21:54:08 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:49.354 21:54:08 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.354 21:54:08 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:49.354 21:54:08 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:49.613 [2024-07-13 21:54:08.816925] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:49.613 [2024-07-13 21:54:08.817024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1332736 ] 00:10:49.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.613 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:49.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.613 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:49.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.613 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:49.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.613 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:49.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.613 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:49.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.613 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:49.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.613 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:49.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.613 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:49.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.613 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:49.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.613 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:49.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.613 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:49.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.613 EAL: Requested device 0000:3d:02.3 cannot be used 
00:10:49.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.613 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:49.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.613 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:49.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.613 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:49.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.613 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:49.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.613 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:49.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.613 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:49.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.613 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:49.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.614 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:49.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.614 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:49.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.614 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:49.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.614 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:49.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.614 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:49.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.614 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:49.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.614 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:49.614 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.614 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:49.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.614 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:49.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.614 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:49.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.614 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:49.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.614 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:49.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.614 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:49.614 [2024-07-13 21:54:08.980894] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.872 [2024-07-13 21:54:09.194381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@862 -- # return 0 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:50.438 Malloc_0 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 
00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:50.438 [ 00:10:50.438 { 00:10:50.438 "name": "Malloc_0", 00:10:50.438 "aliases": [ 00:10:50.438 "5c216f50-5c28-4d9c-b5fa-931c248382f2" 00:10:50.438 ], 00:10:50.438 "product_name": "Malloc disk", 00:10:50.438 "block_size": 512, 00:10:50.438 "num_blocks": 262144, 00:10:50.438 "uuid": "5c216f50-5c28-4d9c-b5fa-931c248382f2", 00:10:50.438 "assigned_rate_limits": { 00:10:50.438 "rw_ios_per_sec": 0, 00:10:50.438 "rw_mbytes_per_sec": 0, 00:10:50.438 "r_mbytes_per_sec": 0, 00:10:50.438 "w_mbytes_per_sec": 0 00:10:50.438 }, 00:10:50.438 "claimed": false, 00:10:50.438 "zoned": false, 00:10:50.438 "supported_io_types": { 00:10:50.438 "read": true, 00:10:50.438 "write": true, 00:10:50.438 "unmap": true, 00:10:50.438 "flush": true, 00:10:50.438 "reset": true, 00:10:50.438 "nvme_admin": false, 00:10:50.438 "nvme_io": false, 00:10:50.438 "nvme_io_md": false, 00:10:50.438 "write_zeroes": true, 00:10:50.438 "zcopy": true, 00:10:50.438 "get_zone_info": false, 00:10:50.438 
"zone_management": false, 00:10:50.438 "zone_append": false, 00:10:50.438 "compare": false, 00:10:50.438 "compare_and_write": false, 00:10:50.438 "abort": true, 00:10:50.438 "seek_hole": false, 00:10:50.438 "seek_data": false, 00:10:50.438 "copy": true, 00:10:50.438 "nvme_iov_md": false 00:10:50.438 }, 00:10:50.438 "memory_domains": [ 00:10:50.438 { 00:10:50.438 "dma_device_id": "system", 00:10:50.438 "dma_device_type": 1 00:10:50.438 }, 00:10:50.438 { 00:10:50.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.438 "dma_device_type": 2 00:10:50.438 } 00:10:50.438 ], 00:10:50.438 "driver_specific": {} 00:10:50.438 } 00:10:50.438 ] 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:50.438 Null_1 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@454 -- # waitforbdev Null_1 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Null_1 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 
00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:50.438 [ 00:10:50.438 { 00:10:50.438 "name": "Null_1", 00:10:50.438 "aliases": [ 00:10:50.438 "3a5d0671-f792-49af-a4b1-7be0ac730944" 00:10:50.438 ], 00:10:50.438 "product_name": "Null disk", 00:10:50.438 "block_size": 512, 00:10:50.438 "num_blocks": 262144, 00:10:50.438 "uuid": "3a5d0671-f792-49af-a4b1-7be0ac730944", 00:10:50.438 "assigned_rate_limits": { 00:10:50.438 "rw_ios_per_sec": 0, 00:10:50.438 "rw_mbytes_per_sec": 0, 00:10:50.438 "r_mbytes_per_sec": 0, 00:10:50.438 "w_mbytes_per_sec": 0 00:10:50.438 }, 00:10:50.438 "claimed": false, 00:10:50.438 "zoned": false, 00:10:50.438 "supported_io_types": { 00:10:50.438 "read": true, 00:10:50.438 "write": true, 00:10:50.438 "unmap": false, 00:10:50.438 "flush": false, 00:10:50.438 "reset": true, 00:10:50.438 "nvme_admin": false, 00:10:50.438 "nvme_io": false, 00:10:50.438 "nvme_io_md": false, 00:10:50.438 "write_zeroes": true, 00:10:50.438 "zcopy": false, 00:10:50.438 "get_zone_info": false, 00:10:50.438 "zone_management": false, 00:10:50.438 "zone_append": false, 00:10:50.438 "compare": false, 00:10:50.438 "compare_and_write": false, 00:10:50.438 "abort": true, 00:10:50.438 "seek_hole": false, 00:10:50.438 "seek_data": false, 00:10:50.438 "copy": false, 00:10:50.438 "nvme_iov_md": false 00:10:50.438 }, 00:10:50.438 "driver_specific": {} 00:10:50.438 } 00:10:50.438 ] 00:10:50.438 21:54:09 
blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@457 -- # qos_function_test 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local io_result=0 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local iops_limit=0 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@414 -- # local bw_limit=0 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:10:50.438 21:54:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:10:50.696 21:54:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:10:50.696 21:54:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:10:50.696 21:54:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:10:50.696 Running I/O for 60 seconds... 
00:10:55.966 21:54:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 89354.16 357416.65 0.00 0.00 360448.00 0.00 0.00 ' 00:10:55.966 21:54:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:10:55.966 21:54:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:10:55.966 21:54:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # iostat_result=89354.16 00:10:55.966 21:54:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 89354 00:10:55.966 21:54:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # io_result=89354 00:10:55.966 21:54:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # iops_limit=22000 00:10:55.966 21:54:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@419 -- # '[' 22000 -gt 1000 ']' 00:10:55.966 21:54:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 22000 Malloc_0 00:10:55.966 21:54:14 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.966 21:54:14 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:55.966 21:54:14 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.966 21:54:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 22000 IOPS Malloc_0 00:10:55.966 21:54:14 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:55.966 21:54:14 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.966 21:54:14 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:55.966 ************************************ 00:10:55.966 START TEST bdev_qos_iops 00:10:55.966 ************************************ 00:10:55.966 21:54:15 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1123 -- # run_qos_test 22000 IOPS Malloc_0 00:10:55.966 21:54:15 
blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_limit=22000 00:10:55.966 21:54:15 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@390 -- # local qos_result=0 00:10:55.966 21:54:15 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0 00:10:55.966 21:54:15 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:10:55.966 21:54:15 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:10:55.966 21:54:15 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # local iostat_result 00:10:55.966 21:54:15 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:10:55.966 21:54:15 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:10:55.966 21:54:15 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # tail -1 00:11:01.272 21:54:20 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 21999.49 87997.98 0.00 0.00 89232.00 0.00 0.00 ' 00:11:01.272 21:54:20 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:11:01.272 21:54:20 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:11:01.272 21:54:20 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # iostat_result=21999.49 00:11:01.272 21:54:20 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@385 -- # echo 21999 00:11:01.272 21:54:20 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # qos_result=21999 00:11:01.272 21:54:20 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']' 00:11:01.272 21:54:20 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # lower_limit=19800 00:11:01.272 21:54:20 
blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@397 -- # upper_limit=24200 00:11:01.272 21:54:20 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 21999 -lt 19800 ']' 00:11:01.272 21:54:20 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 21999 -gt 24200 ']' 00:11:01.272 00:11:01.272 real 0m5.189s 00:11:01.272 user 0m0.089s 00:11:01.272 sys 0m0.045s 00:11:01.272 21:54:20 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:01.272 21:54:20 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:11:01.272 ************************************ 00:11:01.272 END TEST bdev_qos_iops 00:11:01.272 ************************************ 00:11:01.272 21:54:20 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:11:01.272 21:54:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1 00:11:01.272 21:54:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:11:01.272 21:54:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:11:01.272 21:54:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:11:01.272 21:54:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:11:01.272 21:54:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Null_1 00:11:01.272 21:54:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:11:06.540 21:54:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 33651.77 134607.07 0.00 0.00 136192.00 0.00 0.00 ' 00:11:06.540 21:54:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:11:06.540 21:54:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:11:06.540 21:54:25 
blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:11:06.540 21:54:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # iostat_result=136192.00 00:11:06.540 21:54:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 136192 00:11:06.540 21:54:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=136192 00:11:06.540 21:54:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # bw_limit=13 00:11:06.540 21:54:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@429 -- # '[' 13 -lt 2 ']' 00:11:06.540 21:54:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 13 Null_1 00:11:06.540 21:54:25 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.540 21:54:25 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:06.540 21:54:25 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.540 21:54:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 13 BANDWIDTH Null_1 00:11:06.540 21:54:25 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:06.540 21:54:25 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.540 21:54:25 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:06.540 ************************************ 00:11:06.540 START TEST bdev_qos_bw 00:11:06.540 ************************************ 00:11:06.540 21:54:25 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1123 -- # run_qos_test 13 BANDWIDTH Null_1 00:11:06.540 21:54:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_limit=13 00:11:06.540 21:54:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:11:06.540 21:54:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1 
00:11:06.540 21:54:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:11:06.540 21:54:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:11:06.540 21:54:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:11:06.540 21:54:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:11:06.540 21:54:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # grep Null_1 00:11:06.540 21:54:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # tail -1 00:11:11.812 21:54:30 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 3329.33 13317.31 0.00 0.00 13508.00 0.00 0.00 ' 00:11:11.812 21:54:30 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:11:11.812 21:54:30 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:11:11.812 21:54:30 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:11:11.812 21:54:30 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # iostat_result=13508.00 00:11:11.812 21:54:30 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@385 -- # echo 13508 00:11:11.812 21:54:30 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # qos_result=13508 00:11:11.812 21:54:30 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:11:11.812 21:54:30 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@394 -- # qos_limit=13312 00:11:11.812 21:54:30 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # lower_limit=11980 00:11:11.812 21:54:30 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@397 -- # upper_limit=14643 00:11:11.812 21:54:30 
blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 13508 -lt 11980 ']' 00:11:11.812 21:54:30 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 13508 -gt 14643 ']' 00:11:11.812 00:11:11.812 real 0m5.203s 00:11:11.812 user 0m0.092s 00:11:11.812 sys 0m0.041s 00:11:11.812 21:54:30 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:11.812 21:54:30 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:11:11.812 ************************************ 00:11:11.812 END TEST bdev_qos_bw 00:11:11.812 ************************************ 00:11:11.812 21:54:30 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:11:11.812 21:54:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:11:11.812 21:54:30 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.812 21:54:30 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:11.812 21:54:30 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.812 21:54:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:11:11.812 21:54:30 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:11.812 21:54:30 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:11.812 21:54:30 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:11.812 ************************************ 00:11:11.812 START TEST bdev_qos_ro_bw 00:11:11.812 ************************************ 00:11:11.812 21:54:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1123 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:11:11.812 21:54:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_limit=2 
00:11:11.812 21:54:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:11:11.812 21:54:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0 00:11:11.812 21:54:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:11:11.812 21:54:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:11:11.812 21:54:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:11:11.812 21:54:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:11:11.812 21:54:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:11:11.812 21:54:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # tail -1 00:11:17.082 21:54:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 512.09 2048.38 0.00 0.00 2060.00 0.00 0.00 ' 00:11:17.082 21:54:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:11:17.082 21:54:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:11:17.082 21:54:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:11:17.082 21:54:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # iostat_result=2060.00 00:11:17.082 21:54:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@385 -- # echo 2060 00:11:17.082 21:54:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # qos_result=2060 00:11:17.082 21:54:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:11:17.082 21:54:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- 
bdev/blockdev.sh@394 -- # qos_limit=2048 00:11:17.082 21:54:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # lower_limit=1843 00:11:17.082 21:54:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@397 -- # upper_limit=2252 00:11:17.082 21:54:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2060 -lt 1843 ']' 00:11:17.082 21:54:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2060 -gt 2252 ']' 00:11:17.082 00:11:17.082 real 0m5.157s 00:11:17.082 user 0m0.087s 00:11:17.082 sys 0m0.043s 00:11:17.082 21:54:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:17.082 21:54:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:11:17.082 ************************************ 00:11:17.082 END TEST bdev_qos_ro_bw 00:11:17.082 ************************************ 00:11:17.082 21:54:36 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:11:17.082 21:54:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:11:17.082 21:54:36 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.082 21:54:36 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:17.342 21:54:36 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.342 21:54:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1 00:11:17.342 21:54:36 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.342 21:54:36 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:17.342 00:11:17.342 Latency(us) 00:11:17.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:17.342 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:11:17.342 Malloc_0 : 26.55 30194.56 
117.95 0.00 0.00 8394.51 1507.33 503316.48 00:11:17.342 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:11:17.342 Null_1 : 26.74 30941.93 120.87 0.00 0.00 8257.84 537.40 185388.24 00:11:17.342 =================================================================================================================== 00:11:17.342 Total : 61136.49 238.81 0.00 0.00 8325.10 537.40 503316.48 00:11:17.342 0 00:11:17.342 21:54:36 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.342 21:54:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # killprocess 1332736 00:11:17.342 21:54:36 blockdev_general.bdev_qos -- common/autotest_common.sh@948 -- # '[' -z 1332736 ']' 00:11:17.342 21:54:36 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # kill -0 1332736 00:11:17.342 21:54:36 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # uname 00:11:17.342 21:54:36 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:17.600 21:54:36 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1332736 00:11:17.601 21:54:36 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:17.601 21:54:36 blockdev_general.bdev_qos -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:17.601 21:54:36 blockdev_general.bdev_qos -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1332736' 00:11:17.601 killing process with pid 1332736 00:11:17.601 21:54:36 blockdev_general.bdev_qos -- common/autotest_common.sh@967 -- # kill 1332736 00:11:17.601 Received shutdown signal, test time was about 26.805365 seconds 00:11:17.601 00:11:17.601 Latency(us) 00:11:17.601 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:17.601 =================================================================================================================== 00:11:17.601 Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:11:17.601 21:54:36 blockdev_general.bdev_qos -- common/autotest_common.sh@972 -- # wait 1332736 00:11:18.976 21:54:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT 00:11:18.976 00:11:18.976 real 0m29.296s 00:11:18.976 user 0m29.715s 00:11:18.976 sys 0m0.908s 00:11:18.976 21:54:38 blockdev_general.bdev_qos -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:18.976 21:54:38 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:18.976 ************************************ 00:11:18.976 END TEST bdev_qos 00:11:18.976 ************************************ 00:11:18.976 21:54:38 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:11:18.976 21:54:38 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:11:18.976 21:54:38 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:18.976 21:54:38 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:18.976 21:54:38 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:18.976 ************************************ 00:11:18.976 START TEST bdev_qd_sampling 00:11:18.976 ************************************ 00:11:18.976 21:54:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1123 -- # qd_sampling_test_suite '' 00:11:18.976 21:54:38 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD 00:11:18.976 21:54:38 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # QD_PID=1337862 00:11:18.976 21:54:38 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 1337862' 00:11:18.976 Process bdev QD sampling period testing pid: 1337862 00:11:18.976 21:54:38 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 
-C '' 00:11:18.976 21:54:38 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:11:18.976 21:54:38 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@544 -- # waitforlisten 1337862 00:11:18.976 21:54:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@829 -- # '[' -z 1337862 ']' 00:11:18.976 21:54:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.976 21:54:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:18.976 21:54:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.976 21:54:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:18.976 21:54:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:11:18.976 [2024-07-13 21:54:38.200369] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:11:18.976 [2024-07-13 21:54:38.200466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337862 ] 00:11:18.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.976 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:18.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.976 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:18.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.976 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:18.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.976 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:18.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.976 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:18.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.976 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:18.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.976 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:18.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.976 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:18.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.976 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:18.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.976 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:18.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.977 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:18.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.977 EAL: Requested device 0000:3d:02.3 cannot be used 
00:11:18.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.977 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:18.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.977 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:18.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.977 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:18.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.977 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:18.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.977 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:18.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.977 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:18.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.977 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:18.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.977 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:18.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.977 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:18.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.977 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:18.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.977 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:18.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.977 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:18.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.977 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:18.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.977 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:18.977 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.977 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:18.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.977 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:18.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.977 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:18.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.977 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:18.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.977 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:18.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.977 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:18.977 [2024-07-13 21:54:38.362371] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:19.235 [2024-07-13 21:54:38.574832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.235 [2024-07-13 21:54:38.574834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.801 21:54:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:19.801 21:54:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@862 -- # return 0 00:11:19.801 21:54:38 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:11:19.801 21:54:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.801 21:54:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:11:19.801 Malloc_QD 00:11:19.801 21:54:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.801 21:54:39 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:11:19.801 21:54:39 blockdev_general.bdev_qd_sampling -- 
common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD 00:11:19.801 21:54:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:19.801 21:54:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local i 00:11:19.801 21:54:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:19.802 21:54:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:19.802 21:54:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:11:19.802 21:54:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.802 21:54:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:11:19.802 21:54:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.802 21:54:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:11:19.802 21:54:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.802 21:54:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:11:19.802 [ 00:11:19.802 { 00:11:19.802 "name": "Malloc_QD", 00:11:19.802 "aliases": [ 00:11:19.802 "6e914acb-dbb8-449a-804c-63068d014827" 00:11:19.802 ], 00:11:19.802 "product_name": "Malloc disk", 00:11:19.802 "block_size": 512, 00:11:19.802 "num_blocks": 262144, 00:11:19.802 "uuid": "6e914acb-dbb8-449a-804c-63068d014827", 00:11:19.802 "assigned_rate_limits": { 00:11:19.802 "rw_ios_per_sec": 0, 00:11:19.802 "rw_mbytes_per_sec": 0, 00:11:19.802 "r_mbytes_per_sec": 0, 00:11:19.802 "w_mbytes_per_sec": 0 00:11:19.802 }, 00:11:19.802 "claimed": false, 00:11:19.802 "zoned": false, 00:11:19.802 "supported_io_types": { 00:11:19.802 "read": true, 00:11:19.802 "write": true, 00:11:19.802 "unmap": true, 
00:11:19.802 "flush": true, 00:11:19.802 "reset": true, 00:11:19.802 "nvme_admin": false, 00:11:19.802 "nvme_io": false, 00:11:19.802 "nvme_io_md": false, 00:11:19.802 "write_zeroes": true, 00:11:19.802 "zcopy": true, 00:11:19.802 "get_zone_info": false, 00:11:19.802 "zone_management": false, 00:11:19.802 "zone_append": false, 00:11:19.802 "compare": false, 00:11:19.802 "compare_and_write": false, 00:11:19.802 "abort": true, 00:11:19.802 "seek_hole": false, 00:11:19.802 "seek_data": false, 00:11:19.802 "copy": true, 00:11:19.802 "nvme_iov_md": false 00:11:19.802 }, 00:11:19.802 "memory_domains": [ 00:11:19.802 { 00:11:19.802 "dma_device_id": "system", 00:11:19.802 "dma_device_type": 1 00:11:19.802 }, 00:11:19.802 { 00:11:19.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.802 "dma_device_type": 2 00:11:19.802 } 00:11:19.802 ], 00:11:19.802 "driver_specific": {} 00:11:19.802 } 00:11:19.802 ] 00:11:19.802 21:54:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.802 21:54:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # return 0 00:11:19.802 21:54:39 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # sleep 2 00:11:19.802 21:54:39 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:20.059 Running I/O for 5 seconds... 
00:11:21.959 21:54:41 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:11:21.959 21:54:41 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:11:21.959 21:54:41 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:11:21.959 21:54:41 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@521 -- # local iostats 00:11:21.959 21:54:41 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:11:21.959 21:54:41 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.959 21:54:41 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:11:21.959 21:54:41 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.959 21:54:41 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:11:21.959 21:54:41 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.959 21:54:41 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:11:21.959 21:54:41 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.959 21:54:41 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # iostats='{ 00:11:21.959 "tick_rate": 2500000000, 00:11:21.959 "ticks": 11756641439486468, 00:11:21.959 "bdevs": [ 00:11:21.959 { 00:11:21.959 "name": "Malloc_QD", 00:11:21.959 "bytes_read": 938521088, 00:11:21.959 "num_read_ops": 229124, 00:11:21.959 "bytes_written": 0, 00:11:21.959 "num_write_ops": 0, 00:11:21.959 "bytes_unmapped": 0, 00:11:21.959 "num_unmap_ops": 0, 00:11:21.959 "bytes_copied": 0, 00:11:21.959 "num_copy_ops": 0, 00:11:21.959 "read_latency_ticks": 2468955703130, 00:11:21.959 "max_read_latency_ticks": 11605716, 00:11:21.959 "min_read_latency_ticks": 409028, 
00:11:21.959 "write_latency_ticks": 0, 00:11:21.959 "max_write_latency_ticks": 0, 00:11:21.959 "min_write_latency_ticks": 0, 00:11:21.959 "unmap_latency_ticks": 0, 00:11:21.959 "max_unmap_latency_ticks": 0, 00:11:21.959 "min_unmap_latency_ticks": 0, 00:11:21.959 "copy_latency_ticks": 0, 00:11:21.959 "max_copy_latency_ticks": 0, 00:11:21.959 "min_copy_latency_ticks": 0, 00:11:21.959 "io_error": {}, 00:11:21.959 "queue_depth_polling_period": 10, 00:11:21.959 "queue_depth": 512, 00:11:21.959 "io_time": 30, 00:11:21.959 "weighted_io_time": 15360 00:11:21.959 } 00:11:21.959 ] 00:11:21.959 }' 00:11:21.959 21:54:41 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:11:21.959 21:54:41 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # qd_sampling_period=10 00:11:21.959 21:54:41 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:11:21.959 21:54:41 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:11:21.959 21:54:41 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:11:21.959 21:54:41 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.959 21:54:41 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:11:21.959 00:11:21.959 Latency(us) 00:11:21.959 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:21.959 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:11:21.959 Malloc_QD : 2.00 59128.42 230.97 0.00 0.00 4319.51 989.59 4666.16 00:11:21.959 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:11:21.959 Malloc_QD : 2.01 59491.18 232.39 0.00 0.00 4293.24 622.59 4666.16 00:11:21.959 =================================================================================================================== 00:11:21.959 Total : 118619.60 463.36 
0.00 0.00 4306.33 622.59 4666.16 00:11:22.217 0 00:11:22.217 21:54:41 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.217 21:54:41 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # killprocess 1337862 00:11:22.217 21:54:41 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@948 -- # '[' -z 1337862 ']' 00:11:22.217 21:54:41 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # kill -0 1337862 00:11:22.217 21:54:41 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # uname 00:11:22.217 21:54:41 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:22.217 21:54:41 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1337862 00:11:22.217 21:54:41 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:22.217 21:54:41 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:22.217 21:54:41 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1337862' 00:11:22.217 killing process with pid 1337862 00:11:22.217 21:54:41 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@967 -- # kill 1337862 00:11:22.217 Received shutdown signal, test time was about 2.186107 seconds 00:11:22.217 00:11:22.217 Latency(us) 00:11:22.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:22.217 =================================================================================================================== 00:11:22.217 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:22.217 21:54:41 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@972 -- # wait 1337862 00:11:23.627 21:54:42 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:11:23.627 00:11:23.627 real 0m4.553s 
00:11:23.627 user 0m8.267s 00:11:23.627 sys 0m0.485s 00:11:23.627 21:54:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:23.627 21:54:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:11:23.627 ************************************ 00:11:23.627 END TEST bdev_qd_sampling 00:11:23.627 ************************************ 00:11:23.627 21:54:42 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:11:23.627 21:54:42 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:11:23.627 21:54:42 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:23.627 21:54:42 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:23.627 21:54:42 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:23.627 ************************************ 00:11:23.627 START TEST bdev_error 00:11:23.627 ************************************ 00:11:23.627 21:54:42 blockdev_general.bdev_error -- common/autotest_common.sh@1123 -- # error_test_suite '' 00:11:23.627 21:54:42 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:11:23.627 21:54:42 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:11:23.627 21:54:42 blockdev_general.bdev_error -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:11:23.627 21:54:42 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # ERR_PID=1338698 00:11:23.627 21:54:42 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 1338698' 00:11:23.627 Process error testing pid: 1338698 00:11:23.627 21:54:42 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:11:23.627 21:54:42 blockdev_general.bdev_error -- bdev/blockdev.sh@474 -- # waitforlisten 1338698 00:11:23.627 21:54:42 
blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 1338698 ']' 00:11:23.627 21:54:42 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.627 21:54:42 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:23.627 21:54:42 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.627 21:54:42 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:23.627 21:54:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:23.627 [2024-07-13 21:54:42.839732] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:23.627 [2024-07-13 21:54:42.839829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1338698 ] 00:11:23.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.627 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:23.628 [2024-07-13 21:54:43.003829] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.892 [2024-07-13 21:54:43.205469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.460 21:54:43 blockdev_general.bdev_error --
common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:24.460 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:11:24.460 21:54:43 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:11:24.460 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.460 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:24.460 Dev_1 00:11:24.460 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.460 21:54:43 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:11:24.460 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:11:24.460 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:24.460 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:11:24.460 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:24.460 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:24.460 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:11:24.460 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.460 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:24.460 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.460 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:11:24.460 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.460 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:24.460 [ 00:11:24.460 { 00:11:24.460 "name": 
"Dev_1", 00:11:24.460 "aliases": [ 00:11:24.460 "64493c7e-ad28-47a7-b448-cd553d3306e1" 00:11:24.460 ], 00:11:24.460 "product_name": "Malloc disk", 00:11:24.460 "block_size": 512, 00:11:24.460 "num_blocks": 262144, 00:11:24.460 "uuid": "64493c7e-ad28-47a7-b448-cd553d3306e1", 00:11:24.460 "assigned_rate_limits": { 00:11:24.460 "rw_ios_per_sec": 0, 00:11:24.460 "rw_mbytes_per_sec": 0, 00:11:24.460 "r_mbytes_per_sec": 0, 00:11:24.460 "w_mbytes_per_sec": 0 00:11:24.460 }, 00:11:24.460 "claimed": false, 00:11:24.460 "zoned": false, 00:11:24.460 "supported_io_types": { 00:11:24.460 "read": true, 00:11:24.460 "write": true, 00:11:24.460 "unmap": true, 00:11:24.460 "flush": true, 00:11:24.460 "reset": true, 00:11:24.460 "nvme_admin": false, 00:11:24.460 "nvme_io": false, 00:11:24.460 "nvme_io_md": false, 00:11:24.460 "write_zeroes": true, 00:11:24.460 "zcopy": true, 00:11:24.460 "get_zone_info": false, 00:11:24.460 "zone_management": false, 00:11:24.460 "zone_append": false, 00:11:24.460 "compare": false, 00:11:24.460 "compare_and_write": false, 00:11:24.460 "abort": true, 00:11:24.460 "seek_hole": false, 00:11:24.460 "seek_data": false, 00:11:24.460 "copy": true, 00:11:24.460 "nvme_iov_md": false 00:11:24.460 }, 00:11:24.460 "memory_domains": [ 00:11:24.460 { 00:11:24.460 "dma_device_id": "system", 00:11:24.460 "dma_device_type": 1 00:11:24.460 }, 00:11:24.460 { 00:11:24.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.460 "dma_device_type": 2 00:11:24.460 } 00:11:24.460 ], 00:11:24.460 "driver_specific": {} 00:11:24.460 } 00:11:24.460 ] 00:11:24.460 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.460 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:11:24.460 21:54:43 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:11:24.460 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.460 21:54:43 
blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:24.460 true 00:11:24.460 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.460 21:54:43 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:11:24.460 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.460 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:24.719 Dev_2 00:11:24.719 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.719 21:54:43 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:11:24.719 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:11:24.719 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:24.719 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:11:24.719 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:24.719 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:24.719 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:11:24.719 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.719 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:24.719 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.719 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:11:24.719 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.719 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 
00:11:24.719 [ 00:11:24.719 { 00:11:24.719 "name": "Dev_2", 00:11:24.719 "aliases": [ 00:11:24.719 "e4d0832a-52b8-47bf-8a38-4d05e79e8976" 00:11:24.719 ], 00:11:24.719 "product_name": "Malloc disk", 00:11:24.719 "block_size": 512, 00:11:24.719 "num_blocks": 262144, 00:11:24.719 "uuid": "e4d0832a-52b8-47bf-8a38-4d05e79e8976", 00:11:24.719 "assigned_rate_limits": { 00:11:24.719 "rw_ios_per_sec": 0, 00:11:24.719 "rw_mbytes_per_sec": 0, 00:11:24.719 "r_mbytes_per_sec": 0, 00:11:24.719 "w_mbytes_per_sec": 0 00:11:24.719 }, 00:11:24.719 "claimed": false, 00:11:24.719 "zoned": false, 00:11:24.719 "supported_io_types": { 00:11:24.719 "read": true, 00:11:24.719 "write": true, 00:11:24.719 "unmap": true, 00:11:24.719 "flush": true, 00:11:24.719 "reset": true, 00:11:24.719 "nvme_admin": false, 00:11:24.719 "nvme_io": false, 00:11:24.719 "nvme_io_md": false, 00:11:24.719 "write_zeroes": true, 00:11:24.719 "zcopy": true, 00:11:24.719 "get_zone_info": false, 00:11:24.719 "zone_management": false, 00:11:24.719 "zone_append": false, 00:11:24.719 "compare": false, 00:11:24.719 "compare_and_write": false, 00:11:24.719 "abort": true, 00:11:24.719 "seek_hole": false, 00:11:24.719 "seek_data": false, 00:11:24.719 "copy": true, 00:11:24.719 "nvme_iov_md": false 00:11:24.719 }, 00:11:24.719 "memory_domains": [ 00:11:24.719 { 00:11:24.719 "dma_device_id": "system", 00:11:24.719 "dma_device_type": 1 00:11:24.719 }, 00:11:24.719 { 00:11:24.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.719 "dma_device_type": 2 00:11:24.719 } 00:11:24.719 ], 00:11:24.719 "driver_specific": {} 00:11:24.719 } 00:11:24.719 ] 00:11:24.719 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.719 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:11:24.719 21:54:43 blockdev_general.bdev_error -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:11:24.719 21:54:43 blockdev_general.bdev_error 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.719 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:24.719 21:54:43 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.719 21:54:43 blockdev_general.bdev_error -- bdev/blockdev.sh@484 -- # sleep 1 00:11:24.719 21:54:43 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:11:24.719 Running I/O for 5 seconds... 00:11:25.655 21:54:44 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # kill -0 1338698 00:11:25.655 21:54:44 blockdev_general.bdev_error -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. Pid: 1338698' 00:11:25.655 Process is existed as continue on error is set. Pid: 1338698 00:11:25.655 21:54:44 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:11:25.655 21:54:44 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.655 21:54:44 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:25.655 21:54:44 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.655 21:54:44 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:11:25.655 21:54:44 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.655 21:54:44 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:25.655 Timeout while waiting for response: 00:11:25.655 00:11:25.655 00:11:25.914 21:54:45 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.914 21:54:45 blockdev_general.bdev_error -- bdev/blockdev.sh@497 -- # sleep 5 00:11:30.103 00:11:30.103 Latency(us) 00:11:30.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:30.103 Job: 
EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:11:30.103 EE_Dev_1 : 0.92 52249.61 204.10 5.43 0.00 303.82 107.32 563.61 00:11:30.103 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:11:30.103 Dev_2 : 5.00 112473.76 439.35 0.00 0.00 139.94 46.69 111568.49 00:11:30.103 =================================================================================================================== 00:11:30.103 Total : 164723.37 643.45 5.43 0.00 152.85 46.69 111568.49 00:11:31.039 21:54:50 blockdev_general.bdev_error -- bdev/blockdev.sh@499 -- # killprocess 1338698 00:11:31.039 21:54:50 blockdev_general.bdev_error -- common/autotest_common.sh@948 -- # '[' -z 1338698 ']' 00:11:31.040 21:54:50 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # kill -0 1338698 00:11:31.040 21:54:50 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # uname 00:11:31.040 21:54:50 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:31.040 21:54:50 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1338698 00:11:31.040 21:54:50 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:31.040 21:54:50 blockdev_general.bdev_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:31.040 21:54:50 blockdev_general.bdev_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1338698' 00:11:31.040 killing process with pid 1338698 00:11:31.040 21:54:50 blockdev_general.bdev_error -- common/autotest_common.sh@967 -- # kill 1338698 00:11:31.040 Received shutdown signal, test time was about 5.000000 seconds 00:11:31.040 00:11:31.040 Latency(us) 00:11:31.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:31.040 =================================================================================================================== 00:11:31.040 Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:11:31.040 21:54:50 blockdev_general.bdev_error -- common/autotest_common.sh@972 -- # wait 1338698 00:11:32.417 21:54:51 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # ERR_PID=1340287 00:11:32.417 21:54:51 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:11:32.417 21:54:51 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 1340287' 00:11:32.417 Process error testing pid: 1340287 00:11:32.417 21:54:51 blockdev_general.bdev_error -- bdev/blockdev.sh@505 -- # waitforlisten 1340287 00:11:32.417 21:54:51 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 1340287 ']' 00:11:32.417 21:54:51 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.417 21:54:51 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:32.417 21:54:51 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.417 21:54:51 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:32.417 21:54:51 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:32.417 [2024-07-13 21:54:51.795344] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:11:32.417 [2024-07-13 21:54:51.795436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1340287 ] 00:11:32.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:32.677 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:32.677 [2024-07-13 21:54:51.953376] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.936 [2024-07-13 21:54:52.156389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.195 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:33.195 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:11:33.195 21:54:52 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:11:33.195 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.195 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:33.455 Dev_1 00:11:33.455 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.455 21:54:52 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:11:33.455 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:11:33.455 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:11:33.455 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:11:33.455 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:33.455 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:33.455 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:11:33.455 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.455 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:33.455 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.455 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:11:33.455 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.455 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:33.455 [ 00:11:33.455 { 00:11:33.455 "name": "Dev_1", 00:11:33.455 "aliases": [ 00:11:33.455 "93e5eb56-093e-440f-8bf5-225aae9e19cb" 00:11:33.455 ], 00:11:33.455 "product_name": "Malloc disk", 00:11:33.455 "block_size": 512, 00:11:33.455 "num_blocks": 262144, 00:11:33.455 "uuid": "93e5eb56-093e-440f-8bf5-225aae9e19cb", 00:11:33.455 "assigned_rate_limits": { 00:11:33.455 "rw_ios_per_sec": 0, 00:11:33.455 "rw_mbytes_per_sec": 0, 00:11:33.455 "r_mbytes_per_sec": 0, 00:11:33.455 "w_mbytes_per_sec": 0 00:11:33.455 }, 00:11:33.455 "claimed": false, 00:11:33.455 "zoned": false, 00:11:33.455 "supported_io_types": { 00:11:33.455 "read": true, 00:11:33.455 "write": true, 00:11:33.455 "unmap": true, 00:11:33.455 "flush": true, 00:11:33.455 "reset": true, 00:11:33.455 "nvme_admin": false, 00:11:33.455 "nvme_io": false, 00:11:33.455 "nvme_io_md": false, 00:11:33.455 "write_zeroes": true, 00:11:33.455 "zcopy": true, 00:11:33.455 "get_zone_info": 
false, 00:11:33.455 "zone_management": false, 00:11:33.455 "zone_append": false, 00:11:33.455 "compare": false, 00:11:33.455 "compare_and_write": false, 00:11:33.455 "abort": true, 00:11:33.455 "seek_hole": false, 00:11:33.455 "seek_data": false, 00:11:33.455 "copy": true, 00:11:33.455 "nvme_iov_md": false 00:11:33.455 }, 00:11:33.455 "memory_domains": [ 00:11:33.455 { 00:11:33.455 "dma_device_id": "system", 00:11:33.455 "dma_device_type": 1 00:11:33.455 }, 00:11:33.455 { 00:11:33.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.455 "dma_device_type": 2 00:11:33.455 } 00:11:33.455 ], 00:11:33.455 "driver_specific": {} 00:11:33.455 } 00:11:33.455 ] 00:11:33.455 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.455 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:11:33.455 21:54:52 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:11:33.455 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.455 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:33.455 true 00:11:33.455 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.455 21:54:52 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:11:33.455 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.455 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:33.715 Dev_2 00:11:33.715 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.715 21:54:52 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:11:33.715 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:11:33.715 21:54:52 blockdev_general.bdev_error -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:33.715 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:11:33.715 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:33.715 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:33.715 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:11:33.715 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.715 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:33.715 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.715 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:11:33.715 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.715 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:33.715 [ 00:11:33.715 { 00:11:33.715 "name": "Dev_2", 00:11:33.715 "aliases": [ 00:11:33.715 "68f1d19f-8fda-43b7-b893-4f347aa4dc4a" 00:11:33.715 ], 00:11:33.715 "product_name": "Malloc disk", 00:11:33.715 "block_size": 512, 00:11:33.715 "num_blocks": 262144, 00:11:33.715 "uuid": "68f1d19f-8fda-43b7-b893-4f347aa4dc4a", 00:11:33.715 "assigned_rate_limits": { 00:11:33.715 "rw_ios_per_sec": 0, 00:11:33.715 "rw_mbytes_per_sec": 0, 00:11:33.715 "r_mbytes_per_sec": 0, 00:11:33.715 "w_mbytes_per_sec": 0 00:11:33.715 }, 00:11:33.715 "claimed": false, 00:11:33.715 "zoned": false, 00:11:33.715 "supported_io_types": { 00:11:33.715 "read": true, 00:11:33.715 "write": true, 00:11:33.715 "unmap": true, 00:11:33.715 "flush": true, 00:11:33.715 "reset": true, 00:11:33.715 "nvme_admin": false, 00:11:33.715 "nvme_io": false, 00:11:33.715 "nvme_io_md": false, 00:11:33.715 "write_zeroes": true, 
00:11:33.715 "zcopy": true, 00:11:33.715 "get_zone_info": false, 00:11:33.715 "zone_management": false, 00:11:33.715 "zone_append": false, 00:11:33.715 "compare": false, 00:11:33.715 "compare_and_write": false, 00:11:33.715 "abort": true, 00:11:33.715 "seek_hole": false, 00:11:33.715 "seek_data": false, 00:11:33.715 "copy": true, 00:11:33.715 "nvme_iov_md": false 00:11:33.715 }, 00:11:33.715 "memory_domains": [ 00:11:33.715 { 00:11:33.715 "dma_device_id": "system", 00:11:33.715 "dma_device_type": 1 00:11:33.715 }, 00:11:33.715 { 00:11:33.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.715 "dma_device_type": 2 00:11:33.715 } 00:11:33.715 ], 00:11:33.715 "driver_specific": {} 00:11:33.715 } 00:11:33.715 ] 00:11:33.715 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.715 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:11:33.715 21:54:52 blockdev_general.bdev_error -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:11:33.715 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.715 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:33.715 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.715 21:54:52 blockdev_general.bdev_error -- bdev/blockdev.sh@515 -- # NOT wait 1340287 00:11:33.715 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:11:33.716 21:54:52 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:11:33.716 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1340287 00:11:33.716 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:11:33.716 21:54:52 blockdev_general.bdev_error 
-- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:33.716 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:11:33.716 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:33.716 21:54:52 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 1340287 00:11:33.716 Running I/O for 5 seconds... 00:11:33.716 task offset: 240296 on job bdev=EE_Dev_1 fails 00:11:33.716 00:11:33.716 Latency(us) 00:11:33.716 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:33.716 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:11:33.716 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:11:33.716 EE_Dev_1 : 0.00 38596.49 150.77 8771.93 0.00 275.85 102.81 494.80 00:11:33.716 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:11:33.716 Dev_2 : 0.00 25600.00 100.00 0.00 0.00 451.50 101.17 832.31 00:11:33.716 =================================================================================================================== 00:11:33.716 Total : 64196.49 250.77 8771.93 0.00 371.12 101.17 832.31 00:11:33.716 [2024-07-13 21:54:53.020093] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:33.716 request: 00:11:33.716 { 00:11:33.716 "method": "perform_tests", 00:11:33.716 "req_id": 1 00:11:33.716 } 00:11:33.716 Got JSON-RPC error response 00:11:33.716 response: 00:11:33.716 { 00:11:33.716 "code": -32603, 00:11:33.716 "message": "bdevperf failed with error Operation not permitted" 00:11:33.716 } 00:11:35.621 21:54:54 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # es=255 00:11:35.621 21:54:54 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:35.621 21:54:54 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:11:35.621 21:54:54 blockdev_general.bdev_error -- 
common/autotest_common.sh@661 -- # case "$es" in 00:11:35.621 21:54:54 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:11:35.621 21:54:54 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:35.621 00:11:35.622 real 0m11.932s 00:11:35.622 user 0m11.868s 00:11:35.622 sys 0m0.976s 00:11:35.622 21:54:54 blockdev_general.bdev_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:35.622 21:54:54 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:35.622 ************************************ 00:11:35.622 END TEST bdev_error 00:11:35.622 ************************************ 00:11:35.622 21:54:54 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:11:35.622 21:54:54 blockdev_general -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:11:35.622 21:54:54 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:35.622 21:54:54 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:35.622 21:54:54 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:35.622 ************************************ 00:11:35.622 START TEST bdev_stat 00:11:35.622 ************************************ 00:11:35.622 21:54:54 blockdev_general.bdev_stat -- common/autotest_common.sh@1123 -- # stat_test_suite '' 00:11:35.622 21:54:54 blockdev_general.bdev_stat -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:11:35.622 21:54:54 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:11:35.622 21:54:54 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # STAT_PID=1340836 00:11:35.622 21:54:54 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 1340836' 00:11:35.622 Process Bdev IO statistics testing pid: 1340836 00:11:35.622 21:54:54 
blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:11:35.622 21:54:54 blockdev_general.bdev_stat -- bdev/blockdev.sh@599 -- # waitforlisten 1340836 00:11:35.622 21:54:54 blockdev_general.bdev_stat -- common/autotest_common.sh@829 -- # '[' -z 1340836 ']' 00:11:35.622 21:54:54 blockdev_general.bdev_stat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.622 21:54:54 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:35.622 21:54:54 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.622 21:54:54 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:35.622 21:54:54 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:35.622 [2024-07-13 21:54:54.840696] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:11:35.622 [2024-07-13 21:54:54.840794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1340836 ] 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3d:02.3 cannot be used 
00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:35.622 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:35.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.622 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:35.622 [2024-07-13 21:54:55.003385] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:35.881 [2024-07-13 21:54:55.203647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.881 [2024-07-13 21:54:55.203656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.450 21:54:55 blockdev_general.bdev_stat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:36.450 21:54:55 blockdev_general.bdev_stat -- common/autotest_common.sh@862 -- # return 0 00:11:36.450 21:54:55 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:11:36.450 21:54:55 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.450 21:54:55 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:36.450 Malloc_STAT 00:11:36.450 21:54:55 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.450 21:54:55 blockdev_general.bdev_stat -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:11:36.450 21:54:55 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local 
bdev_name=Malloc_STAT 00:11:36.450 21:54:55 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:36.450 21:54:55 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local i 00:11:36.450 21:54:55 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:36.450 21:54:55 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:36.450 21:54:55 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:11:36.450 21:54:55 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.450 21:54:55 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:36.450 21:54:55 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.450 21:54:55 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:11:36.450 21:54:55 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.450 21:54:55 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:36.450 [ 00:11:36.450 { 00:11:36.450 "name": "Malloc_STAT", 00:11:36.450 "aliases": [ 00:11:36.450 "71eb2e2b-889b-4e1d-a74a-ffb4af800a83" 00:11:36.450 ], 00:11:36.450 "product_name": "Malloc disk", 00:11:36.450 "block_size": 512, 00:11:36.450 "num_blocks": 262144, 00:11:36.450 "uuid": "71eb2e2b-889b-4e1d-a74a-ffb4af800a83", 00:11:36.450 "assigned_rate_limits": { 00:11:36.450 "rw_ios_per_sec": 0, 00:11:36.450 "rw_mbytes_per_sec": 0, 00:11:36.450 "r_mbytes_per_sec": 0, 00:11:36.450 "w_mbytes_per_sec": 0 00:11:36.450 }, 00:11:36.450 "claimed": false, 00:11:36.450 "zoned": false, 00:11:36.450 "supported_io_types": { 00:11:36.450 "read": true, 00:11:36.450 "write": true, 00:11:36.450 "unmap": true, 00:11:36.450 "flush": true, 00:11:36.450 "reset": true, 00:11:36.450 "nvme_admin": false, 00:11:36.450 "nvme_io": false, 
00:11:36.450 "nvme_io_md": false, 00:11:36.450 "write_zeroes": true, 00:11:36.450 "zcopy": true, 00:11:36.450 "get_zone_info": false, 00:11:36.450 "zone_management": false, 00:11:36.450 "zone_append": false, 00:11:36.450 "compare": false, 00:11:36.450 "compare_and_write": false, 00:11:36.450 "abort": true, 00:11:36.450 "seek_hole": false, 00:11:36.450 "seek_data": false, 00:11:36.450 "copy": true, 00:11:36.450 "nvme_iov_md": false 00:11:36.450 }, 00:11:36.450 "memory_domains": [ 00:11:36.450 { 00:11:36.450 "dma_device_id": "system", 00:11:36.450 "dma_device_type": 1 00:11:36.450 }, 00:11:36.450 { 00:11:36.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.450 "dma_device_type": 2 00:11:36.450 } 00:11:36.450 ], 00:11:36.450 "driver_specific": {} 00:11:36.450 } 00:11:36.450 ] 00:11:36.450 21:54:55 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.450 21:54:55 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # return 0 00:11:36.450 21:54:55 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # sleep 2 00:11:36.450 21:54:55 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:36.709 Running I/O for 10 seconds... 
00:11:38.620 21:54:57 blockdev_general.bdev_stat -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:11:38.620 21:54:57 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:11:38.620 21:54:57 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local iostats 00:11:38.620 21:54:57 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count1 00:11:38.620 21:54:57 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local io_count2 00:11:38.620 21:54:57 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:11:38.620 21:54:57 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:11:38.620 21:54:57 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:11:38.620 21:54:57 blockdev_general.bdev_stat -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:11:38.620 21:54:57 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:11:38.620 21:54:57 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.620 21:54:57 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:38.620 21:54:57 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.620 21:54:57 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # iostats='{ 00:11:38.620 "tick_rate": 2500000000, 00:11:38.620 "ticks": 11756682953137880, 00:11:38.620 "bdevs": [ 00:11:38.620 { 00:11:38.620 "name": "Malloc_STAT", 00:11:38.620 "bytes_read": 945861120, 00:11:38.620 "num_read_ops": 230916, 00:11:38.620 "bytes_written": 0, 00:11:38.620 "num_write_ops": 0, 00:11:38.620 "bytes_unmapped": 0, 00:11:38.620 "num_unmap_ops": 0, 00:11:38.620 "bytes_copied": 0, 00:11:38.620 "num_copy_ops": 0, 00:11:38.620 "read_latency_ticks": 2454269249936, 00:11:38.620 "max_read_latency_ticks": 11434218, 00:11:38.620 "min_read_latency_ticks": 412882, 
00:11:38.620 "write_latency_ticks": 0, 00:11:38.620 "max_write_latency_ticks": 0, 00:11:38.620 "min_write_latency_ticks": 0, 00:11:38.620 "unmap_latency_ticks": 0, 00:11:38.620 "max_unmap_latency_ticks": 0, 00:11:38.620 "min_unmap_latency_ticks": 0, 00:11:38.620 "copy_latency_ticks": 0, 00:11:38.620 "max_copy_latency_ticks": 0, 00:11:38.620 "min_copy_latency_ticks": 0, 00:11:38.620 "io_error": {} 00:11:38.620 } 00:11:38.620 ] 00:11:38.620 }' 00:11:38.620 21:54:57 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:11:38.620 21:54:57 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # io_count1=230916 00:11:38.620 21:54:57 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:11:38.620 21:54:57 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.620 21:54:57 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:38.620 21:54:57 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.620 21:54:57 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:11:38.620 "tick_rate": 2500000000, 00:11:38.620 "ticks": 11756683115178626, 00:11:38.620 "name": "Malloc_STAT", 00:11:38.620 "channels": [ 00:11:38.620 { 00:11:38.620 "thread_id": 2, 00:11:38.620 "bytes_read": 486539264, 00:11:38.620 "num_read_ops": 118784, 00:11:38.620 "bytes_written": 0, 00:11:38.620 "num_write_ops": 0, 00:11:38.620 "bytes_unmapped": 0, 00:11:38.620 "num_unmap_ops": 0, 00:11:38.620 "bytes_copied": 0, 00:11:38.620 "num_copy_ops": 0, 00:11:38.620 "read_latency_ticks": 1267236186080, 00:11:38.620 "max_read_latency_ticks": 11434218, 00:11:38.620 "min_read_latency_ticks": 8060110, 00:11:38.620 "write_latency_ticks": 0, 00:11:38.620 "max_write_latency_ticks": 0, 00:11:38.620 "min_write_latency_ticks": 0, 00:11:38.620 "unmap_latency_ticks": 0, 00:11:38.620 "max_unmap_latency_ticks": 0, 00:11:38.620 
"min_unmap_latency_ticks": 0, 00:11:38.620 "copy_latency_ticks": 0, 00:11:38.620 "max_copy_latency_ticks": 0, 00:11:38.620 "min_copy_latency_ticks": 0 00:11:38.620 }, 00:11:38.620 { 00:11:38.620 "thread_id": 3, 00:11:38.620 "bytes_read": 490733568, 00:11:38.620 "num_read_ops": 119808, 00:11:38.620 "bytes_written": 0, 00:11:38.620 "num_write_ops": 0, 00:11:38.620 "bytes_unmapped": 0, 00:11:38.620 "num_unmap_ops": 0, 00:11:38.620 "bytes_copied": 0, 00:11:38.620 "num_copy_ops": 0, 00:11:38.620 "read_latency_ticks": 1269290255876, 00:11:38.620 "max_read_latency_ticks": 11212678, 00:11:38.620 "min_read_latency_ticks": 8077596, 00:11:38.620 "write_latency_ticks": 0, 00:11:38.620 "max_write_latency_ticks": 0, 00:11:38.620 "min_write_latency_ticks": 0, 00:11:38.620 "unmap_latency_ticks": 0, 00:11:38.620 "max_unmap_latency_ticks": 0, 00:11:38.620 "min_unmap_latency_ticks": 0, 00:11:38.620 "copy_latency_ticks": 0, 00:11:38.620 "max_copy_latency_ticks": 0, 00:11:38.620 "min_copy_latency_ticks": 0 00:11:38.620 } 00:11:38.620 ] 00:11:38.620 }' 00:11:38.620 21:54:57 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:11:38.620 21:54:57 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel1=118784 00:11:38.620 21:54:57 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=118784 00:11:38.620 21:54:57 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:11:38.620 21:54:57 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel2=119808 00:11:38.620 21:54:57 blockdev_general.bdev_stat -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=238592 00:11:38.620 21:54:57 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:11:38.621 21:54:57 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.621 21:54:57 blockdev_general.bdev_stat -- 
common/autotest_common.sh@10 -- # set +x 00:11:38.880 21:54:58 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.880 21:54:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # iostats='{ 00:11:38.880 "tick_rate": 2500000000, 00:11:38.880 "ticks": 11756683415614512, 00:11:38.880 "bdevs": [ 00:11:38.880 { 00:11:38.880 "name": "Malloc_STAT", 00:11:38.880 "bytes_read": 1036038656, 00:11:38.880 "num_read_ops": 252932, 00:11:38.880 "bytes_written": 0, 00:11:38.880 "num_write_ops": 0, 00:11:38.880 "bytes_unmapped": 0, 00:11:38.880 "num_unmap_ops": 0, 00:11:38.880 "bytes_copied": 0, 00:11:38.880 "num_copy_ops": 0, 00:11:38.880 "read_latency_ticks": 2689631220672, 00:11:38.880 "max_read_latency_ticks": 11434218, 00:11:38.880 "min_read_latency_ticks": 412882, 00:11:38.880 "write_latency_ticks": 0, 00:11:38.880 "max_write_latency_ticks": 0, 00:11:38.880 "min_write_latency_ticks": 0, 00:11:38.880 "unmap_latency_ticks": 0, 00:11:38.880 "max_unmap_latency_ticks": 0, 00:11:38.880 "min_unmap_latency_ticks": 0, 00:11:38.880 "copy_latency_ticks": 0, 00:11:38.880 "max_copy_latency_ticks": 0, 00:11:38.880 "min_copy_latency_ticks": 0, 00:11:38.880 "io_error": {} 00:11:38.880 } 00:11:38.880 ] 00:11:38.880 }' 00:11:38.880 21:54:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:11:38.880 21:54:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # io_count2=252932 00:11:38.880 21:54:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 238592 -lt 230916 ']' 00:11:38.880 21:54:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 238592 -gt 252932 ']' 00:11:38.880 21:54:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:11:38.880 21:54:58 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.880 21:54:58 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:38.880 00:11:38.880 
Latency(us) 00:11:38.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:38.880 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:11:38.880 Malloc_STAT : 2.18 59846.52 233.78 0.00 0.00 4267.89 930.61 4587.52 00:11:38.880 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:11:38.880 Malloc_STAT : 2.18 60296.69 235.53 0.00 0.00 4236.20 619.32 4508.88 00:11:38.880 =================================================================================================================== 00:11:38.880 Total : 120143.21 469.31 0.00 0.00 4251.98 619.32 4587.52 00:11:38.880 0 00:11:38.880 21:54:58 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.880 21:54:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # killprocess 1340836 00:11:38.880 21:54:58 blockdev_general.bdev_stat -- common/autotest_common.sh@948 -- # '[' -z 1340836 ']' 00:11:38.880 21:54:58 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # kill -0 1340836 00:11:38.880 21:54:58 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # uname 00:11:38.880 21:54:58 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:38.880 21:54:58 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1340836 00:11:38.880 21:54:58 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:38.880 21:54:58 blockdev_general.bdev_stat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:38.880 21:54:58 blockdev_general.bdev_stat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1340836' 00:11:38.880 killing process with pid 1340836 00:11:38.880 21:54:58 blockdev_general.bdev_stat -- common/autotest_common.sh@967 -- # kill 1340836 00:11:38.880 Received shutdown signal, test time was about 2.354786 seconds 00:11:38.880 00:11:38.880 Latency(us) 
00:11:38.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:38.880 =================================================================================================================== 00:11:38.880 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:38.880 21:54:58 blockdev_general.bdev_stat -- common/autotest_common.sh@972 -- # wait 1340836 00:11:40.261 21:54:59 blockdev_general.bdev_stat -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:11:40.261 00:11:40.261 real 0m4.740s 00:11:40.261 user 0m8.765s 00:11:40.261 sys 0m0.536s 00:11:40.261 21:54:59 blockdev_general.bdev_stat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:40.261 21:54:59 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:40.261 ************************************ 00:11:40.261 END TEST bdev_stat 00:11:40.261 ************************************ 00:11:40.261 21:54:59 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:11:40.261 21:54:59 blockdev_general -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:11:40.261 21:54:59 blockdev_general -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:11:40.261 21:54:59 blockdev_general -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:11:40.261 21:54:59 blockdev_general -- bdev/blockdev.sh@811 -- # cleanup 00:11:40.261 21:54:59 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:11:40.261 21:54:59 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:11:40.261 21:54:59 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:11:40.261 21:54:59 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:11:40.261 21:54:59 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:11:40.261 21:54:59 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:11:40.261 00:11:40.261 real 2m21.471s 
00:11:40.261 user 7m49.099s 00:11:40.261 sys 0m22.108s 00:11:40.261 21:54:59 blockdev_general -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:40.261 21:54:59 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:40.261 ************************************ 00:11:40.261 END TEST blockdev_general 00:11:40.261 ************************************ 00:11:40.261 21:54:59 -- common/autotest_common.sh@1142 -- # return 0 00:11:40.261 21:54:59 -- spdk/autotest.sh@190 -- # run_test bdev_raid /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:11:40.261 21:54:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:40.261 21:54:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:40.261 21:54:59 -- common/autotest_common.sh@10 -- # set +x 00:11:40.261 ************************************ 00:11:40.261 START TEST bdev_raid 00:11:40.261 ************************************ 00:11:40.261 21:54:59 bdev_raid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:11:40.520 * Looking for test storage... 
00:11:40.520 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:11:40.520 21:54:59 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:11:40.520 21:54:59 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:11:40.520 21:54:59 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:11:40.520 21:54:59 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:11:40.520 21:54:59 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:11:40.520 21:54:59 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:11:40.520 21:54:59 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:11:40.520 21:54:59 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' Linux = Linux ']' 00:11:40.520 21:54:59 bdev_raid -- bdev/bdev_raid.sh@856 -- # modprobe -n nbd 00:11:40.520 21:54:59 bdev_raid -- bdev/bdev_raid.sh@857 -- # has_nbd=true 00:11:40.520 21:54:59 bdev_raid -- bdev/bdev_raid.sh@858 -- # modprobe nbd 00:11:40.520 21:54:59 bdev_raid -- bdev/bdev_raid.sh@859 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:11:40.520 21:54:59 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:40.520 21:54:59 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:40.520 21:54:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:40.520 ************************************ 00:11:40.520 START TEST raid_function_test_raid0 00:11:40.520 ************************************ 00:11:40.520 21:54:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1123 -- # raid_function_test raid0 00:11:40.520 21:54:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:11:40.520 21:54:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:11:40.520 21:54:59 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:11:40.520 21:54:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=1341723 00:11:40.520 21:54:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 1341723' 00:11:40.520 Process raid pid: 1341723 00:11:40.520 21:54:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:40.520 21:54:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 1341723 /var/tmp/spdk-raid.sock 00:11:40.520 21:54:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@829 -- # '[' -z 1341723 ']' 00:11:40.520 21:54:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:40.520 21:54:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:40.520 21:54:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:40.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:40.520 21:54:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:40.520 21:54:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:40.780 [2024-07-13 21:54:59.913834] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:11:40.780 [2024-07-13 21:54:59.913932] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:40.780 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:40.780 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:40.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:40.780 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:40.780 [2024-07-13 21:55:00.080115] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.039 [2024-07-13 21:55:00.298730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.298 [2024-07-13 21:55:00.571565] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:41.298 [2024-07-13 21:55:00.571591] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:41.557 21:55:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:41.557 21:55:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # return 0 00:11:41.557 21:55:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:11:41.557 21:55:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:11:41.557 21:55:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:11:41.557 21:55:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:11:41.557 21:55:00 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:11:41.817 [2024-07-13 21:55:00.966380] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:41.817 [2024-07-13 21:55:00.968039] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:41.817 [2024-07-13 21:55:00.968092] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:11:41.817 [2024-07-13 21:55:00.968109] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:41.817 [2024-07-13 21:55:00.968345] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:11:41.817 [2024-07-13 21:55:00.968502] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:11:41.817 [2024-07-13 21:55:00.968512] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x61600003ff80 00:11:41.817 [2024-07-13 21:55:00.968646] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.817 Base_1 00:11:41.817 Base_2 00:11:41.817 21:55:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:11:41.817 21:55:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:11:41.817 21:55:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:11:41.817 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:11:41.817 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:11:41.817 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@96 -- # nbd_start_disks 
/var/tmp/spdk-raid.sock raid /dev/nbd0 00:11:41.817 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:11:41.817 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:11:41.817 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:41.817 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:41.817 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:41.817 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:11:41.817 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:41.817 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:41.817 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:11:42.076 [2024-07-13 21:55:01.327365] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:11:42.076 /dev/nbd0 00:11:42.076 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:42.076 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:42.076 21:55:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:42.076 21:55:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@867 -- # local i 00:11:42.076 21:55:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:42.076 21:55:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:42.076 21:55:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 
/proc/partitions 00:11:42.076 21:55:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # break 00:11:42.076 21:55:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:42.076 21:55:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:42.076 21:55:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:42.076 1+0 records in 00:11:42.076 1+0 records out 00:11:42.076 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000153299 s, 26.7 MB/s 00:11:42.076 21:55:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:42.076 21:55:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # size=4096 00:11:42.076 21:55:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:42.076 21:55:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:42.076 21:55:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # return 0 00:11:42.076 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:42.076 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:42.076 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:11:42.076 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:11:42.076 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:11:42.336 21:55:01 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:42.336 { 00:11:42.336 "nbd_device": "/dev/nbd0", 00:11:42.336 "bdev_name": "raid" 00:11:42.336 } 00:11:42.336 ]' 00:11:42.336 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:42.336 { 00:11:42.336 "nbd_device": "/dev/nbd0", 00:11:42.336 "bdev_name": "raid" 00:11:42.336 } 00:11:42.336 ]' 00:11:42.336 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:42.336 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:11:42.336 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:11:42.336 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:42.336 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:11:42.336 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:11:42.336 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:11:42.336 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:11:42.336 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:11:42.336 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:11:42.336 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:11:42.336 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:11:42.336 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:11:42.336 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:11:42.336 21:55:01 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:11:42.336 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:11:42.336 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:11:42.336 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:11:42.336 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:11:42.336 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:11:42.336 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:11:42.336 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:11:42.336 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:11:42.336 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:11:42.336 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:11:42.336 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:11:42.336 4096+0 records in 00:11:42.336 4096+0 records out 00:11:42.336 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0285213 s, 73.5 MB/s 00:11:42.336 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:11:42.595 4096+0 records in 00:11:42.595 4096+0 records out 00:11:42.595 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.221733 s, 9.5 MB/s 00:11:42.595 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:11:42.595 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:42.595 21:55:01 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:11:42.595 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:11:42.595 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:11:42.595 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:11:42.595 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:11:42.595 128+0 records in 00:11:42.595 128+0 records out 00:11:42.595 65536 bytes (66 kB, 64 KiB) copied, 0.000823073 s, 79.6 MB/s 00:11:42.595 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:11:42.595 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:11:42.595 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:42.595 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:11:42.595 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:11:42.595 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:11:42.595 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:11:42.595 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:11:42.595 2035+0 records in 00:11:42.595 2035+0 records out 00:11:42.595 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0108734 s, 95.8 MB/s 00:11:42.595 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:11:42.595 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs 
/dev/nbd0 00:11:42.595 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:42.595 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:11:42.595 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:11:42.595 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:11:42.595 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:11:42.595 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:11:42.595 456+0 records in 00:11:42.595 456+0 records out 00:11:42.595 233472 bytes (233 kB, 228 KiB) copied, 0.0027687 s, 84.3 MB/s 00:11:42.595 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:11:42.595 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:11:42.855 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:42.855 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:11:42.855 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:11:42.855 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:11:42.855 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:11:42.855 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:11:42.855 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:42.855 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:11:42.855 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:11:42.855 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:42.855 21:55:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:11:42.855 21:55:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:42.855 21:55:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:42.855 21:55:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:42.855 21:55:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:42.855 [2024-07-13 21:55:02.179836] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.855 21:55:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:42.855 21:55:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:42.855 21:55:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:11:42.855 21:55:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:11:42.855 21:55:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:11:42.855 21:55:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:11:42.855 21:55:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:11:43.114 21:55:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:43.114 21:55:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 
00:11:43.114 21:55:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:43.114 21:55:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:43.114 21:55:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:43.114 21:55:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:43.114 21:55:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:11:43.114 21:55:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:11:43.114 21:55:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:43.114 21:55:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:11:43.114 21:55:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:11:43.114 21:55:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 1341723 00:11:43.114 21:55:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@948 -- # '[' -z 1341723 ']' 00:11:43.115 21:55:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # kill -0 1341723 00:11:43.115 21:55:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # uname 00:11:43.115 21:55:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:43.115 21:55:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1341723 00:11:43.115 21:55:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:43.115 21:55:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:43.115 21:55:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1341723' 00:11:43.115 killing process with pid 1341723 
00:11:43.115 21:55:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@967 -- # kill 1341723 00:11:43.115 [2024-07-13 21:55:02.469446] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:43.115 [2024-07-13 21:55:02.469537] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.115 [2024-07-13 21:55:02.469583] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.115 [2024-07-13 21:55:02.469598] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name raid, state offline 00:11:43.115 21:55:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # wait 1341723 00:11:43.374 [2024-07-13 21:55:02.605984] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:44.750 21:55:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:11:44.751 00:11:44.751 real 0m3.994s 00:11:44.751 user 0m4.557s 00:11:44.751 sys 0m1.178s 00:11:44.751 21:55:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:44.751 21:55:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:44.751 ************************************ 00:11:44.751 END TEST raid_function_test_raid0 00:11:44.751 ************************************ 00:11:44.751 21:55:03 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:44.751 21:55:03 bdev_raid -- bdev/bdev_raid.sh@860 -- # run_test raid_function_test_concat raid_function_test concat 00:11:44.751 21:55:03 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:44.751 21:55:03 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:44.751 21:55:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:44.751 ************************************ 00:11:44.751 START TEST raid_function_test_concat 00:11:44.751 ************************************ 
00:11:44.751 21:55:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1123 -- # raid_function_test concat 00:11:44.751 21:55:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 00:11:44.751 21:55:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:11:44.751 21:55:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:11:44.751 21:55:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=1342595 00:11:44.751 21:55:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 1342595' 00:11:44.751 Process raid pid: 1342595 00:11:44.751 21:55:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:44.751 21:55:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 1342595 /var/tmp/spdk-raid.sock 00:11:44.751 21:55:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@829 -- # '[' -z 1342595 ']' 00:11:44.751 21:55:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:44.751 21:55:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:44.751 21:55:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:44.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:11:44.751 21:55:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:44.751 21:55:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:44.751 [2024-07-13 21:55:03.986487] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:44.751 [2024-07-13 21:55:03.986574] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 
0000:3d:02.1 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3f:01.7 cannot be 
used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:44.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:44.751 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:45.008 [2024-07-13 21:55:04.147352] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.008 [2024-07-13 21:55:04.354979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.267 [2024-07-13 21:55:04.593408] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.267 [2024-07-13 21:55:04.593435] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.526 21:55:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:45.526 21:55:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # return 0 00:11:45.526 21:55:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:11:45.526 21:55:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local 
raid_level=concat 00:11:45.526 21:55:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:11:45.526 21:55:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:11:45.526 21:55:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:11:45.785 [2024-07-13 21:55:04.989581] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:45.785 [2024-07-13 21:55:04.991274] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:45.785 [2024-07-13 21:55:04.991331] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:11:45.785 [2024-07-13 21:55:04.991348] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:45.785 [2024-07-13 21:55:04.991577] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:11:45.785 [2024-07-13 21:55:04.991739] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:11:45.785 [2024-07-13 21:55:04.991749] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x61600003ff80 00:11:45.785 [2024-07-13 21:55:04.991885] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.785 Base_1 00:11:45.785 Base_2 00:11:45.785 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:11:45.785 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:11:45.785 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:11:46.045 
21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:11:46.045 [2024-07-13 21:55:05.346523] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:11:46.045 /dev/nbd0 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@867 -- # local i 00:11:46.045 21:55:05 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # break 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:46.045 1+0 records in 00:11:46.045 1+0 records out 00:11:46.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242819 s, 16.9 MB/s 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # size=4096 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # return 0 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:11:46.045 21:55:05 
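The `waitfornbd` steps traced above poll `/proc/partitions` up to 20 times for the nbd device, break out once it appears, then issue a direct-I/O `dd` read to confirm the device actually answers. A simplified, self-contained sketch of that poll-with-retries pattern (the function name `wait_for_line` is an assumption, not the SPDK helper itself):

```shell
# Simplified sketch (hypothetical name, not the actual SPDK waitfornbd helper)
# of the poll-with-retries pattern in the trace above: grep for a word in a
# file up to 20 times, returning as soon as it appears.
wait_for_line() {
    local pattern=$1 file=$2 i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$pattern" "$file"; then
            return 0    # found: the device (or line) showed up
        fi
        sleep 0.1
    done
    return 1            # gave up after 20 attempts
}
```

In the trace the pattern is `nbd0` and the file is `/proc/partitions`; the follow-up `dd ... iflag=direct` read is what proves the device is usable, not just listed.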
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:11:46.045 21:55:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:11:46.304 21:55:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:46.304 { 00:11:46.304 "nbd_device": "/dev/nbd0", 00:11:46.304 "bdev_name": "raid" 00:11:46.304 } 00:11:46.304 ]' 00:11:46.304 21:55:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:46.304 { 00:11:46.304 "nbd_device": "/dev/nbd0", 00:11:46.304 "bdev_name": "raid" 00:11:46.304 } 00:11:46.304 ]' 00:11:46.304 21:55:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:46.304 21:55:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:11:46.304 21:55:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:11:46.304 21:55:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:46.304 21:55:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:11:46.304 21:55:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:11:46.304 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:11:46.304 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:11:46.305 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:11:46.305 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:11:46.305 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:11:46.305 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 
-- # local rpc_server=/var/tmp/spdk-raid.sock 00:11:46.305 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:11:46.305 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:11:46.305 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:11:46.305 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:11:46.305 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # blksize=512 00:11:46.305 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:11:46.305 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:11:46.305 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:11:46.305 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:11:46.305 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:11:46.305 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:11:46.305 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:11:46.305 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:11:46.305 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:11:46.305 4096+0 records in 00:11:46.305 4096+0 records out 00:11:46.305 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0280562 s, 74.7 MB/s 00:11:46.305 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:11:46.564 4096+0 records in 00:11:46.564 4096+0 records out 00:11:46.564 2097152 bytes (2.1 MB, 
2.0 MiB) copied, 0.149275 s, 14.0 MB/s 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:11:46.564 128+0 records in 00:11:46.564 128+0 records out 00:11:46.564 65536 bytes (66 kB, 64 KiB) copied, 0.000651009 s, 101 MB/s 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:11:46.564 2035+0 records in 00:11:46.564 2035+0 
records out 00:11:46.564 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0118941 s, 87.6 MB/s 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:11:46.564 456+0 records in 00:11:46.564 456+0 records out 00:11:46.564 233472 bytes (233 kB, 228 KiB) copied, 0.00273824 s, 85.3 MB/s 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 
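The three iterations above all follow the same discard-verify pattern: zero an `(offset, count)` region in the reference file with `dd conv=notrunc`, `blkdiscard` the same region on the nbd device, flush, then `cmp` the full 2 MiB. A runnable sketch using two plain files, where zeroing the "device" copy stands in for `blkdiscard` (an assumption — real devices do not always guarantee zero-on-discard; raid bdevs here do):

```shell
# Sketch of the discard-verify loop from the trace: for each (offset, nblocks)
# pair, zero the region in both the reference file and a "device" copy (the
# copy stands in for an nbd device whose discard reads back zeroes -- an
# assumption), then compare byte-for-byte over the whole 2 MiB.
blksize=512
ref=$(mktemp); dev=$(mktemp)
dd if=/dev/urandom of="$ref" bs=$blksize count=4096 status=none
cp "$ref" "$dev"

offs=(0 1028 321)     # same offsets as unmap_blk_offs in the trace
nums=(128 2035 456)   # same lengths as unmap_blk_nums
for i in 0 1 2; do
    off=${offs[$i]}; num=${nums[$i]}
    dd if=/dev/zero of="$ref" bs=$blksize seek=$off count=$num conv=notrunc status=none
    dd if=/dev/zero of="$dev" bs=$blksize seek=$off count=$num conv=notrunc status=none
    cmp -b -n $((4096 * blksize)) "$ref" "$dev"   # fails loudly on any mismatch
done
```

Against the real device the second `dd` is replaced by `blkdiscard -o $((off * blksize)) -l $((num * blksize)) /dev/nbd0` followed by `blockdev --flushbufs`, exactly as the trace shows.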
00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:46.564 21:55:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:11:46.823 21:55:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:46.823 21:55:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:46.823 [2024-07-13 21:55:06.113550] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.823 21:55:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:46.823 21:55:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:46.823 21:55:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:46.823 21:55:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:46.823 21:55:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:11:46.823 21:55:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:11:46.823 21:55:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:11:46.823 21:55:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:11:46.823 21:55:06 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:11:47.082 21:55:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:47.082 21:55:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:47.082 21:55:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:47.082 21:55:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:47.082 21:55:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:47.082 21:55:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:47.082 21:55:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:11:47.082 21:55:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:11:47.082 21:55:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:47.082 21:55:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:11:47.082 21:55:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:11:47.082 21:55:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 1342595 00:11:47.082 21:55:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@948 -- # '[' -z 1342595 ']' 00:11:47.082 21:55:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # kill -0 1342595 00:11:47.082 21:55:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # uname 00:11:47.082 21:55:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:47.082 21:55:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1342595 00:11:47.082 21:55:06 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:47.082 21:55:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:47.082 21:55:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1342595' 00:11:47.082 killing process with pid 1342595 00:11:47.082 21:55:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@967 -- # kill 1342595 00:11:47.082 [2024-07-13 21:55:06.402439] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:47.082 [2024-07-13 21:55:06.402530] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:47.082 [2024-07-13 21:55:06.402581] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:47.082 [2024-07-13 21:55:06.402595] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name raid, state offline 00:11:47.082 21:55:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # wait 1342595 00:11:47.341 [2024-07-13 21:55:06.541155] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:48.719 21:55:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:11:48.719 00:11:48.719 real 0m3.867s 00:11:48.719 user 0m4.483s 00:11:48.719 sys 0m1.136s 00:11:48.719 21:55:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:48.719 21:55:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:48.719 ************************************ 00:11:48.719 END TEST raid_function_test_concat 00:11:48.719 ************************************ 00:11:48.719 21:55:07 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:48.719 21:55:07 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:11:48.719 
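The teardown traced above is the `killprocess` pattern: verify the pid is non-empty and alive with `kill -0`, read its command name with `ps --no-headers -o comm=`, refuse to signal a `sudo` wrapper, then kill and reap it. A simplified sketch (the name `killproc_sketch` is hypothetical, not the SPDK function):

```shell
# Sketch of the killprocess teardown pattern traced above (simplified,
# hypothetical name): confirm the pid exists with kill -0, refuse to
# signal a sudo wrapper, then send the kill and reap the child.
killproc_sketch() {
    local pid=$1 name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1    # process must be alive
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" != sudo ] || return 1           # never kill sudo itself
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap; ignore the kill exit status
}
```

`kill -0` sends no signal at all; it only tests whether the pid exists and is signalable, which is why the trace runs it before the real `kill`.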
21:55:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:48.719 21:55:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.719 21:55:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:48.719 ************************************ 00:11:48.719 START TEST raid0_resize_test 00:11:48.719 ************************************ 00:11:48.719 21:55:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1123 -- # raid0_resize_test 00:11:48.719 21:55:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local blksize=512 00:11:48.719 21:55:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:11:48.719 21:55:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:11:48.719 21:55:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:11:48.719 21:55:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb 00:11:48.719 21:55:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:11:48.719 21:55:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=1343240 00:11:48.719 21:55:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 1343240' 00:11:48.719 Process raid pid: 1343240 00:11:48.719 21:55:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:48.719 21:55:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 1343240 /var/tmp/spdk-raid.sock 00:11:48.719 21:55:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@829 -- # '[' -z 1343240 ']' 00:11:48.719 21:55:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:48.719 21:55:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:11:48.719 21:55:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:48.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:48.719 21:55:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:48.720 21:55:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.720 [2024-07-13 21:55:07.933676] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:48.720 [2024-07-13 21:55:07.933767] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested 
device 0000:3d:01.7 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3f:01.5 
cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:48.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.720 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:48.720 [2024-07-13 21:55:08.094497] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.979 [2024-07-13 21:55:08.298974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.238 [2024-07-13 21:55:08.548206] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:49.238 [2024-07-13 21:55:08.548233] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:49.497 21:55:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:49.497 21:55:08 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # return 0 00:11:49.497 21:55:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:11:49.497 Base_1 00:11:49.497 21:55:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:11:49.756 Base_2 00:11:49.756 21:55:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:11:50.015 [2024-07-13 21:55:09.158966] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:50.015 [2024-07-13 21:55:09.160698] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:50.015 [2024-07-13 21:55:09.160752] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:11:50.015 [2024-07-13 21:55:09.160765] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:50.015 [2024-07-13 21:55:09.161047] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000103d0 00:11:50.015 [2024-07-13 21:55:09.161190] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:11:50.015 [2024-07-13 21:55:09.161200] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x61600003ff80 00:11:50.015 [2024-07-13 21:55:09.161380] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.015 21:55:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:11:50.015 [2024-07-13 21:55:09.323339] 
bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:50.015 [2024-07-13 21:55:09.323367] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:11:50.015 true 00:11:50.015 21:55:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:11:50.015 21:55:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:11:50.275 [2024-07-13 21:55:09.495969] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:50.275 21:55:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:11:50.275 21:55:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:11:50.275 21:55:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:11:50.275 21:55:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:11:50.534 [2024-07-13 21:55:09.668267] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:50.534 [2024-07-13 21:55:09.668292] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:11:50.534 [2024-07-13 21:55:09.668323] bdev_raid.c:2289:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:11:50.534 true 00:11:50.534 21:55:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:11:50.534 21:55:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks' 00:11:50.534 [2024-07-13 21:55:09.836839] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:50.534 21:55:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:11:50.534 21:55:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:11:50.534 21:55:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:11:50.534 21:55:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 1343240 00:11:50.534 21:55:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@948 -- # '[' -z 1343240 ']' 00:11:50.534 21:55:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # kill -0 1343240 00:11:50.534 21:55:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # uname 00:11:50.534 21:55:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:50.534 21:55:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1343240 00:11:50.534 21:55:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:50.534 21:55:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:50.534 21:55:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1343240' 00:11:50.534 killing process with pid 1343240 00:11:50.534 21:55:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@967 -- # kill 1343240 00:11:50.534 [2024-07-13 21:55:09.902353] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:50.534 [2024-07-13 21:55:09.902432] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:50.534 [2024-07-13 21:55:09.902481] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:50.534 [2024-07-13 21:55:09.902493] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x61600003ff80 name Raid, state offline 00:11:50.534 21:55:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # wait 1343240 00:11:50.534 [2024-07-13 21:55:09.912816] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:51.949 21:55:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 00:11:51.949 00:11:51.949 real 0m3.262s 00:11:51.949 user 0m4.200s 00:11:51.949 sys 0m0.602s 00:11:51.949 21:55:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:51.949 21:55:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.949 ************************************ 00:11:51.949 END TEST raid0_resize_test 00:11:51.949 ************************************ 00:11:51.949 21:55:11 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:51.949 21:55:11 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:11:51.949 21:55:11 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:11:51.949 21:55:11 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:11:51.949 21:55:11 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:51.949 21:55:11 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:51.949 21:55:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:51.949 ************************************ 00:11:51.949 START TEST raid_state_function_test 00:11:51.949 ************************************ 00:11:51.949 21:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 false 00:11:51.949 21:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:11:51.949 21:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local 
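The raid0_resize_test that just completed is, underneath, simple block arithmetic: two 32 MiB null bdevs with 512-byte blocks give a 64 MiB raid0 of 131072 blocks; after `bdev_null_resize` grows both bases to 64 MiB, the raid's block count doubles to 262144 (128 MiB), which is what the `'[' 128 '!=' 128 ']'` check confirms. The same arithmetic, reproduced from the values in the trace:

```shell
# Size arithmetic behind raid0_resize_test, using the values from the trace:
# raid0 capacity is the sum of its base bdevs, measured in blocklen units.
blksize=512
bdev_size_mb=32
num_base=2
blkcnt=$(( num_base * bdev_size_mb * 1024 * 1024 / blksize ))
echo "$blkcnt"         # 131072 blocks before the resize

new_bdev_size_mb=64
new_blkcnt=$(( num_base * new_bdev_size_mb * 1024 * 1024 / blksize ))
echo "$new_blkcnt"     # 262144 blocks after both bases are resized
raid_size_mb=$(( new_blkcnt * blksize / 1024 / 1024 ))
echo "$raid_size_mb"   # 128 MiB, matching the final size check in the test
```

This matches the log lines `block count was changed from 131072 to 262144` and `blockcnt 131072, blocklen 512` earlier in the trace.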
superblock=false 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:11:51.950 21:55:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1343917 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1343917' 00:11:51.950 Process raid pid: 1343917 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1343917 /var/tmp/spdk-raid.sock 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 1343917 ']' 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:51.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:51.950 21:55:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.950 [2024-07-13 21:55:11.287570] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:11:51.950 [2024-07-13 21:55:11.287662] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.209 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:52.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:52.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:52.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:52.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:52.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:52.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:52.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:52.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:52.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:52.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:52.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:52.210 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:52.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:52.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:52.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:52.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:52.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:52.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:52.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:52.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:52.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:52.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:52.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:52.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:52.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:52.210 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:52.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:52.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:52.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:52.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:52.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:52.210 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:52.210 [2024-07-13 21:55:11.452237] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.469 [2024-07-13 21:55:11.657966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.729 [2024-07-13 21:55:11.897779] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:52.729 [2024-07-13 21:55:11.897806] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:52.729 21:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:52.729 21:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:11:52.729 21:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:52.987 [2024-07-13 21:55:12.202577] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:52.987 [2024-07-13 21:55:12.202623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 
00:11:52.987 [2024-07-13 21:55:12.202633] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:52.987 [2024-07-13 21:55:12.202644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:52.987 21:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:52.987 21:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:52.987 21:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:52.987 21:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:52.987 21:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:52.987 21:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:52.987 21:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:52.987 21:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:52.987 21:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:52.987 21:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:52.987 21:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.987 21:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:53.245 21:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:53.245 "name": "Existed_Raid", 00:11:53.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.245 "strip_size_kb": 64, 
00:11:53.245 "state": "configuring", 00:11:53.245 "raid_level": "raid0", 00:11:53.245 "superblock": false, 00:11:53.245 "num_base_bdevs": 2, 00:11:53.245 "num_base_bdevs_discovered": 0, 00:11:53.245 "num_base_bdevs_operational": 2, 00:11:53.245 "base_bdevs_list": [ 00:11:53.245 { 00:11:53.245 "name": "BaseBdev1", 00:11:53.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.245 "is_configured": false, 00:11:53.245 "data_offset": 0, 00:11:53.245 "data_size": 0 00:11:53.245 }, 00:11:53.245 { 00:11:53.245 "name": "BaseBdev2", 00:11:53.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.245 "is_configured": false, 00:11:53.245 "data_offset": 0, 00:11:53.245 "data_size": 0 00:11:53.245 } 00:11:53.245 ] 00:11:53.245 }' 00:11:53.245 21:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:53.245 21:55:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.503 21:55:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:53.762 [2024-07-13 21:55:13.016602] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:53.762 [2024-07-13 21:55:13.016633] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:11:53.762 21:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:54.021 [2024-07-13 21:55:13.185079] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:54.021 [2024-07-13 21:55:13.185117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:54.021 [2024-07-13 21:55:13.185127] 
bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:54.021 [2024-07-13 21:55:13.185138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:54.021 21:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:54.021 [2024-07-13 21:55:13.382163] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:54.021 BaseBdev1 00:11:54.021 21:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:54.021 21:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:54.021 21:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:54.021 21:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:54.021 21:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:54.021 21:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:54.021 21:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:54.280 21:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:54.538 [ 00:11:54.539 { 00:11:54.539 "name": "BaseBdev1", 00:11:54.539 "aliases": [ 00:11:54.539 "924944e0-966e-4a0e-8342-f53719bc683b" 00:11:54.539 ], 00:11:54.539 "product_name": "Malloc disk", 00:11:54.539 "block_size": 512, 00:11:54.539 "num_blocks": 65536, 00:11:54.539 "uuid": 
"924944e0-966e-4a0e-8342-f53719bc683b", 00:11:54.539 "assigned_rate_limits": { 00:11:54.539 "rw_ios_per_sec": 0, 00:11:54.539 "rw_mbytes_per_sec": 0, 00:11:54.539 "r_mbytes_per_sec": 0, 00:11:54.539 "w_mbytes_per_sec": 0 00:11:54.539 }, 00:11:54.539 "claimed": true, 00:11:54.539 "claim_type": "exclusive_write", 00:11:54.539 "zoned": false, 00:11:54.539 "supported_io_types": { 00:11:54.539 "read": true, 00:11:54.539 "write": true, 00:11:54.539 "unmap": true, 00:11:54.539 "flush": true, 00:11:54.539 "reset": true, 00:11:54.539 "nvme_admin": false, 00:11:54.539 "nvme_io": false, 00:11:54.539 "nvme_io_md": false, 00:11:54.539 "write_zeroes": true, 00:11:54.539 "zcopy": true, 00:11:54.539 "get_zone_info": false, 00:11:54.539 "zone_management": false, 00:11:54.539 "zone_append": false, 00:11:54.539 "compare": false, 00:11:54.539 "compare_and_write": false, 00:11:54.539 "abort": true, 00:11:54.539 "seek_hole": false, 00:11:54.539 "seek_data": false, 00:11:54.539 "copy": true, 00:11:54.539 "nvme_iov_md": false 00:11:54.539 }, 00:11:54.539 "memory_domains": [ 00:11:54.539 { 00:11:54.539 "dma_device_id": "system", 00:11:54.539 "dma_device_type": 1 00:11:54.539 }, 00:11:54.539 { 00:11:54.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.539 "dma_device_type": 2 00:11:54.539 } 00:11:54.539 ], 00:11:54.539 "driver_specific": {} 00:11:54.539 } 00:11:54.539 ] 00:11:54.539 21:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:54.539 21:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:54.539 21:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:54.539 21:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:54.539 21:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:54.539 21:55:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:54.539 21:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:54.539 21:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:54.539 21:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:54.539 21:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:54.539 21:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:54.539 21:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:54.539 21:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.539 21:55:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:54.539 "name": "Existed_Raid", 00:11:54.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.539 "strip_size_kb": 64, 00:11:54.539 "state": "configuring", 00:11:54.539 "raid_level": "raid0", 00:11:54.539 "superblock": false, 00:11:54.539 "num_base_bdevs": 2, 00:11:54.539 "num_base_bdevs_discovered": 1, 00:11:54.539 "num_base_bdevs_operational": 2, 00:11:54.539 "base_bdevs_list": [ 00:11:54.539 { 00:11:54.539 "name": "BaseBdev1", 00:11:54.539 "uuid": "924944e0-966e-4a0e-8342-f53719bc683b", 00:11:54.539 "is_configured": true, 00:11:54.539 "data_offset": 0, 00:11:54.539 "data_size": 65536 00:11:54.539 }, 00:11:54.539 { 00:11:54.539 "name": "BaseBdev2", 00:11:54.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.539 "is_configured": false, 00:11:54.539 "data_offset": 0, 00:11:54.539 "data_size": 0 00:11:54.539 } 00:11:54.539 ] 00:11:54.539 }' 00:11:54.539 21:55:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:54.539 21:55:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.107 21:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:55.367 [2024-07-13 21:55:14.541241] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:55.367 [2024-07-13 21:55:14.541289] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:11:55.367 21:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:55.367 [2024-07-13 21:55:14.709750] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:55.367 [2024-07-13 21:55:14.711474] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:55.367 [2024-07-13 21:55:14.711511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:55.367 21:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:55.367 21:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:55.367 21:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:55.367 21:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:55.367 21:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:55.367 21:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid0 00:11:55.367 21:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:55.367 21:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:55.367 21:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:55.367 21:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:55.367 21:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:55.367 21:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:55.367 21:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:55.367 21:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.624 21:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:55.624 "name": "Existed_Raid", 00:11:55.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.624 "strip_size_kb": 64, 00:11:55.624 "state": "configuring", 00:11:55.624 "raid_level": "raid0", 00:11:55.624 "superblock": false, 00:11:55.624 "num_base_bdevs": 2, 00:11:55.624 "num_base_bdevs_discovered": 1, 00:11:55.625 "num_base_bdevs_operational": 2, 00:11:55.625 "base_bdevs_list": [ 00:11:55.625 { 00:11:55.625 "name": "BaseBdev1", 00:11:55.625 "uuid": "924944e0-966e-4a0e-8342-f53719bc683b", 00:11:55.625 "is_configured": true, 00:11:55.625 "data_offset": 0, 00:11:55.625 "data_size": 65536 00:11:55.625 }, 00:11:55.625 { 00:11:55.625 "name": "BaseBdev2", 00:11:55.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.625 "is_configured": false, 00:11:55.625 "data_offset": 0, 00:11:55.625 "data_size": 0 00:11:55.625 } 00:11:55.625 ] 00:11:55.625 }' 
00:11:55.625 21:55:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:55.625 21:55:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.191 21:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:56.191 [2024-07-13 21:55:15.563071] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:56.191 [2024-07-13 21:55:15.563111] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:11:56.191 [2024-07-13 21:55:15.563121] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:56.191 [2024-07-13 21:55:15.563365] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:11:56.191 [2024-07-13 21:55:15.563540] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:11:56.191 [2024-07-13 21:55:15.563552] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:11:56.191 [2024-07-13 21:55:15.563812] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.191 BaseBdev2 00:11:56.191 21:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:56.191 21:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:56.191 21:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:56.448 21:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:56.448 21:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:56.448 21:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:11:56.448 21:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:56.448 21:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:56.706 [ 00:11:56.706 { 00:11:56.706 "name": "BaseBdev2", 00:11:56.706 "aliases": [ 00:11:56.706 "61870e42-8bfd-445f-967f-801826ddb787" 00:11:56.706 ], 00:11:56.706 "product_name": "Malloc disk", 00:11:56.706 "block_size": 512, 00:11:56.706 "num_blocks": 65536, 00:11:56.706 "uuid": "61870e42-8bfd-445f-967f-801826ddb787", 00:11:56.706 "assigned_rate_limits": { 00:11:56.707 "rw_ios_per_sec": 0, 00:11:56.707 "rw_mbytes_per_sec": 0, 00:11:56.707 "r_mbytes_per_sec": 0, 00:11:56.707 "w_mbytes_per_sec": 0 00:11:56.707 }, 00:11:56.707 "claimed": true, 00:11:56.707 "claim_type": "exclusive_write", 00:11:56.707 "zoned": false, 00:11:56.707 "supported_io_types": { 00:11:56.707 "read": true, 00:11:56.707 "write": true, 00:11:56.707 "unmap": true, 00:11:56.707 "flush": true, 00:11:56.707 "reset": true, 00:11:56.707 "nvme_admin": false, 00:11:56.707 "nvme_io": false, 00:11:56.707 "nvme_io_md": false, 00:11:56.707 "write_zeroes": true, 00:11:56.707 "zcopy": true, 00:11:56.707 "get_zone_info": false, 00:11:56.707 "zone_management": false, 00:11:56.707 "zone_append": false, 00:11:56.707 "compare": false, 00:11:56.707 "compare_and_write": false, 00:11:56.707 "abort": true, 00:11:56.707 "seek_hole": false, 00:11:56.707 "seek_data": false, 00:11:56.707 "copy": true, 00:11:56.707 "nvme_iov_md": false 00:11:56.707 }, 00:11:56.707 "memory_domains": [ 00:11:56.707 { 00:11:56.707 "dma_device_id": "system", 00:11:56.707 "dma_device_type": 1 00:11:56.707 }, 00:11:56.707 { 00:11:56.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.707 
"dma_device_type": 2 00:11:56.707 } 00:11:56.707 ], 00:11:56.707 "driver_specific": {} 00:11:56.707 } 00:11:56.707 ] 00:11:56.707 21:55:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:56.707 21:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:56.707 21:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:56.707 21:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:11:56.707 21:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:56.707 21:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:56.707 21:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:56.707 21:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:56.707 21:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:56.707 21:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:56.707 21:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:56.707 21:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:56.707 21:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:56.707 21:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:56.707 21:55:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.965 21:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:11:56.965 "name": "Existed_Raid", 00:11:56.965 "uuid": "f889513f-46e6-4a3a-876d-802b15e40c29", 00:11:56.965 "strip_size_kb": 64, 00:11:56.965 "state": "online", 00:11:56.965 "raid_level": "raid0", 00:11:56.965 "superblock": false, 00:11:56.965 "num_base_bdevs": 2, 00:11:56.965 "num_base_bdevs_discovered": 2, 00:11:56.965 "num_base_bdevs_operational": 2, 00:11:56.965 "base_bdevs_list": [ 00:11:56.965 { 00:11:56.965 "name": "BaseBdev1", 00:11:56.965 "uuid": "924944e0-966e-4a0e-8342-f53719bc683b", 00:11:56.965 "is_configured": true, 00:11:56.965 "data_offset": 0, 00:11:56.965 "data_size": 65536 00:11:56.965 }, 00:11:56.965 { 00:11:56.965 "name": "BaseBdev2", 00:11:56.965 "uuid": "61870e42-8bfd-445f-967f-801826ddb787", 00:11:56.965 "is_configured": true, 00:11:56.965 "data_offset": 0, 00:11:56.965 "data_size": 65536 00:11:56.965 } 00:11:56.965 ] 00:11:56.965 }' 00:11:56.965 21:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:56.965 21:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.531 21:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:57.531 21:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:57.531 21:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:57.531 21:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:57.531 21:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:57.531 21:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:57.531 21:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:57.531 21:55:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:57.531 [2024-07-13 21:55:16.774561] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.531 21:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:57.531 "name": "Existed_Raid", 00:11:57.531 "aliases": [ 00:11:57.531 "f889513f-46e6-4a3a-876d-802b15e40c29" 00:11:57.531 ], 00:11:57.531 "product_name": "Raid Volume", 00:11:57.531 "block_size": 512, 00:11:57.531 "num_blocks": 131072, 00:11:57.531 "uuid": "f889513f-46e6-4a3a-876d-802b15e40c29", 00:11:57.531 "assigned_rate_limits": { 00:11:57.531 "rw_ios_per_sec": 0, 00:11:57.531 "rw_mbytes_per_sec": 0, 00:11:57.531 "r_mbytes_per_sec": 0, 00:11:57.531 "w_mbytes_per_sec": 0 00:11:57.531 }, 00:11:57.531 "claimed": false, 00:11:57.531 "zoned": false, 00:11:57.531 "supported_io_types": { 00:11:57.531 "read": true, 00:11:57.531 "write": true, 00:11:57.531 "unmap": true, 00:11:57.531 "flush": true, 00:11:57.531 "reset": true, 00:11:57.531 "nvme_admin": false, 00:11:57.531 "nvme_io": false, 00:11:57.531 "nvme_io_md": false, 00:11:57.531 "write_zeroes": true, 00:11:57.531 "zcopy": false, 00:11:57.531 "get_zone_info": false, 00:11:57.531 "zone_management": false, 00:11:57.531 "zone_append": false, 00:11:57.531 "compare": false, 00:11:57.531 "compare_and_write": false, 00:11:57.531 "abort": false, 00:11:57.531 "seek_hole": false, 00:11:57.531 "seek_data": false, 00:11:57.531 "copy": false, 00:11:57.531 "nvme_iov_md": false 00:11:57.531 }, 00:11:57.531 "memory_domains": [ 00:11:57.531 { 00:11:57.531 "dma_device_id": "system", 00:11:57.531 "dma_device_type": 1 00:11:57.531 }, 00:11:57.531 { 00:11:57.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.531 "dma_device_type": 2 00:11:57.531 }, 00:11:57.531 { 00:11:57.531 "dma_device_id": "system", 00:11:57.531 "dma_device_type": 1 00:11:57.531 }, 00:11:57.531 { 00:11:57.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:57.531 "dma_device_type": 2 00:11:57.531 } 00:11:57.531 ], 00:11:57.531 "driver_specific": { 00:11:57.531 "raid": { 00:11:57.531 "uuid": "f889513f-46e6-4a3a-876d-802b15e40c29", 00:11:57.531 "strip_size_kb": 64, 00:11:57.531 "state": "online", 00:11:57.531 "raid_level": "raid0", 00:11:57.531 "superblock": false, 00:11:57.531 "num_base_bdevs": 2, 00:11:57.531 "num_base_bdevs_discovered": 2, 00:11:57.531 "num_base_bdevs_operational": 2, 00:11:57.531 "base_bdevs_list": [ 00:11:57.531 { 00:11:57.531 "name": "BaseBdev1", 00:11:57.531 "uuid": "924944e0-966e-4a0e-8342-f53719bc683b", 00:11:57.531 "is_configured": true, 00:11:57.531 "data_offset": 0, 00:11:57.531 "data_size": 65536 00:11:57.531 }, 00:11:57.531 { 00:11:57.531 "name": "BaseBdev2", 00:11:57.531 "uuid": "61870e42-8bfd-445f-967f-801826ddb787", 00:11:57.531 "is_configured": true, 00:11:57.531 "data_offset": 0, 00:11:57.531 "data_size": 65536 00:11:57.531 } 00:11:57.531 ] 00:11:57.531 } 00:11:57.531 } 00:11:57.531 }' 00:11:57.531 21:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:57.531 21:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:57.531 BaseBdev2' 00:11:57.531 21:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:57.531 21:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:57.531 21:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:57.789 21:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:57.789 "name": "BaseBdev1", 00:11:57.789 "aliases": [ 00:11:57.789 "924944e0-966e-4a0e-8342-f53719bc683b" 00:11:57.789 ], 00:11:57.789 "product_name": "Malloc disk", 
00:11:57.789 "block_size": 512, 00:11:57.789 "num_blocks": 65536, 00:11:57.789 "uuid": "924944e0-966e-4a0e-8342-f53719bc683b", 00:11:57.789 "assigned_rate_limits": { 00:11:57.789 "rw_ios_per_sec": 0, 00:11:57.789 "rw_mbytes_per_sec": 0, 00:11:57.789 "r_mbytes_per_sec": 0, 00:11:57.789 "w_mbytes_per_sec": 0 00:11:57.789 }, 00:11:57.789 "claimed": true, 00:11:57.789 "claim_type": "exclusive_write", 00:11:57.789 "zoned": false, 00:11:57.789 "supported_io_types": { 00:11:57.789 "read": true, 00:11:57.789 "write": true, 00:11:57.789 "unmap": true, 00:11:57.789 "flush": true, 00:11:57.789 "reset": true, 00:11:57.789 "nvme_admin": false, 00:11:57.789 "nvme_io": false, 00:11:57.789 "nvme_io_md": false, 00:11:57.789 "write_zeroes": true, 00:11:57.789 "zcopy": true, 00:11:57.789 "get_zone_info": false, 00:11:57.789 "zone_management": false, 00:11:57.789 "zone_append": false, 00:11:57.789 "compare": false, 00:11:57.789 "compare_and_write": false, 00:11:57.789 "abort": true, 00:11:57.789 "seek_hole": false, 00:11:57.789 "seek_data": false, 00:11:57.789 "copy": true, 00:11:57.789 "nvme_iov_md": false 00:11:57.789 }, 00:11:57.789 "memory_domains": [ 00:11:57.789 { 00:11:57.789 "dma_device_id": "system", 00:11:57.789 "dma_device_type": 1 00:11:57.789 }, 00:11:57.789 { 00:11:57.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.789 "dma_device_type": 2 00:11:57.789 } 00:11:57.789 ], 00:11:57.789 "driver_specific": {} 00:11:57.789 }' 00:11:57.789 21:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:57.789 21:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:57.789 21:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:57.789 21:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:57.789 21:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:57.789 21:55:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:57.789 21:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:58.047 21:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:58.047 21:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:58.047 21:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:58.047 21:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:58.047 21:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:58.047 21:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:58.047 21:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:58.047 21:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:58.305 21:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:58.305 "name": "BaseBdev2", 00:11:58.305 "aliases": [ 00:11:58.305 "61870e42-8bfd-445f-967f-801826ddb787" 00:11:58.305 ], 00:11:58.305 "product_name": "Malloc disk", 00:11:58.305 "block_size": 512, 00:11:58.305 "num_blocks": 65536, 00:11:58.305 "uuid": "61870e42-8bfd-445f-967f-801826ddb787", 00:11:58.305 "assigned_rate_limits": { 00:11:58.305 "rw_ios_per_sec": 0, 00:11:58.305 "rw_mbytes_per_sec": 0, 00:11:58.305 "r_mbytes_per_sec": 0, 00:11:58.305 "w_mbytes_per_sec": 0 00:11:58.305 }, 00:11:58.305 "claimed": true, 00:11:58.305 "claim_type": "exclusive_write", 00:11:58.305 "zoned": false, 00:11:58.305 "supported_io_types": { 00:11:58.305 "read": true, 00:11:58.305 "write": true, 00:11:58.305 "unmap": true, 00:11:58.305 "flush": true, 00:11:58.305 "reset": 
true, 00:11:58.305 "nvme_admin": false, 00:11:58.305 "nvme_io": false, 00:11:58.305 "nvme_io_md": false, 00:11:58.305 "write_zeroes": true, 00:11:58.305 "zcopy": true, 00:11:58.305 "get_zone_info": false, 00:11:58.305 "zone_management": false, 00:11:58.305 "zone_append": false, 00:11:58.305 "compare": false, 00:11:58.305 "compare_and_write": false, 00:11:58.305 "abort": true, 00:11:58.305 "seek_hole": false, 00:11:58.305 "seek_data": false, 00:11:58.305 "copy": true, 00:11:58.305 "nvme_iov_md": false 00:11:58.305 }, 00:11:58.305 "memory_domains": [ 00:11:58.305 { 00:11:58.305 "dma_device_id": "system", 00:11:58.305 "dma_device_type": 1 00:11:58.305 }, 00:11:58.305 { 00:11:58.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.305 "dma_device_type": 2 00:11:58.305 } 00:11:58.305 ], 00:11:58.305 "driver_specific": {} 00:11:58.305 }' 00:11:58.305 21:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:58.305 21:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:58.305 21:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:58.305 21:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:58.306 21:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:58.306 21:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:58.306 21:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:58.306 21:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:58.564 21:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:58.564 21:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:58.564 21:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:58.564 21:55:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:58.564 21:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:58.823 [2024-07-13 21:55:17.961466] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:58.823 [2024-07-13 21:55:17.961495] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:58.823 [2024-07-13 21:55:17.961541] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:58.823 21:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:58.823 21:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:11:58.823 21:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:58.823 21:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:58.823 21:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:11:58.823 21:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:11:58.823 21:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:58.823 21:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:11:58.823 21:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:58.823 21:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:58.823 21:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:58.823 21:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:11:58.823 21:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:58.823 21:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:58.823 21:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:58.823 21:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.823 21:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.823 21:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:58.823 "name": "Existed_Raid", 00:11:58.823 "uuid": "f889513f-46e6-4a3a-876d-802b15e40c29", 00:11:58.823 "strip_size_kb": 64, 00:11:58.823 "state": "offline", 00:11:58.823 "raid_level": "raid0", 00:11:58.823 "superblock": false, 00:11:58.823 "num_base_bdevs": 2, 00:11:58.823 "num_base_bdevs_discovered": 1, 00:11:58.823 "num_base_bdevs_operational": 1, 00:11:58.823 "base_bdevs_list": [ 00:11:58.823 { 00:11:58.823 "name": null, 00:11:58.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.823 "is_configured": false, 00:11:58.823 "data_offset": 0, 00:11:58.823 "data_size": 65536 00:11:58.823 }, 00:11:58.823 { 00:11:58.823 "name": "BaseBdev2", 00:11:58.823 "uuid": "61870e42-8bfd-445f-967f-801826ddb787", 00:11:58.823 "is_configured": true, 00:11:58.823 "data_offset": 0, 00:11:58.823 "data_size": 65536 00:11:58.823 } 00:11:58.823 ] 00:11:58.823 }' 00:11:58.823 21:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:58.823 21:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.391 21:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:59.391 21:55:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:59.391 21:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:59.391 21:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:59.650 21:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:59.650 21:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:59.650 21:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:59.650 [2024-07-13 21:55:18.967332] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:59.650 [2024-07-13 21:55:18.967380] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:11:59.910 21:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:59.910 21:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:59.910 21:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:59.910 21:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:59.910 21:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:59.911 21:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:59.911 21:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:11:59.911 21:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 
1343917 00:11:59.911 21:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 1343917 ']' 00:11:59.911 21:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 1343917 00:11:59.911 21:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:11:59.911 21:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:59.911 21:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1343917 00:12:00.170 21:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:00.170 21:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:00.170 21:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1343917' 00:12:00.170 killing process with pid 1343917 00:12:00.170 21:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 1343917 00:12:00.170 [2024-07-13 21:55:19.304257] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:00.170 21:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 1343917 00:12:00.170 [2024-07-13 21:55:19.321768] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:01.547 21:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:12:01.547 00:12:01.547 real 0m9.343s 00:12:01.547 user 0m15.319s 00:12:01.547 sys 0m1.746s 00:12:01.547 21:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:01.547 21:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.547 ************************************ 00:12:01.548 END TEST raid_state_function_test 00:12:01.548 ************************************ 00:12:01.548 21:55:20 
bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:01.548 21:55:20 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:12:01.548 21:55:20 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:01.548 21:55:20 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:01.548 21:55:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:01.548 ************************************ 00:12:01.548 START TEST raid_state_function_test_sb 00:12:01.548 ************************************ 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 true 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1345750 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1345750' 00:12:01.548 Process raid pid: 1345750 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1345750 /var/tmp/spdk-raid.sock 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@829 -- # '[' -z 1345750 ']' 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:01.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:01.548 21:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.548 [2024-07-13 21:55:20.692035] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:01.548 [2024-07-13 21:55:20.692116] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:12:01.548 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 
EAL: Requested device 0000:3f:01.3 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:01.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:01.548 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:01.548 [2024-07-13 21:55:20.855099] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.808 [2024-07-13 21:55:21.062278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.067 [2024-07-13 21:55:21.308207] 
bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.067 [2024-07-13 21:55:21.308237] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.067 21:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:02.067 21:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:12:02.067 21:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:02.326 [2024-07-13 21:55:21.591934] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:02.326 [2024-07-13 21:55:21.591984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:02.326 [2024-07-13 21:55:21.591995] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:02.326 [2024-07-13 21:55:21.592008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:02.326 21:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:02.326 21:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:02.326 21:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:02.326 21:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:02.326 21:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:02.326 21:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:02.326 21:55:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:02.326 21:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:02.326 21:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:02.326 21:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:02.326 21:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:02.326 21:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.586 21:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:02.586 "name": "Existed_Raid", 00:12:02.586 "uuid": "8499a641-3979-4e2c-b10e-9b7fe1b56e8d", 00:12:02.586 "strip_size_kb": 64, 00:12:02.586 "state": "configuring", 00:12:02.586 "raid_level": "raid0", 00:12:02.586 "superblock": true, 00:12:02.586 "num_base_bdevs": 2, 00:12:02.586 "num_base_bdevs_discovered": 0, 00:12:02.586 "num_base_bdevs_operational": 2, 00:12:02.586 "base_bdevs_list": [ 00:12:02.586 { 00:12:02.586 "name": "BaseBdev1", 00:12:02.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.586 "is_configured": false, 00:12:02.586 "data_offset": 0, 00:12:02.586 "data_size": 0 00:12:02.586 }, 00:12:02.586 { 00:12:02.586 "name": "BaseBdev2", 00:12:02.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.586 "is_configured": false, 00:12:02.586 "data_offset": 0, 00:12:02.586 "data_size": 0 00:12:02.586 } 00:12:02.586 ] 00:12:02.586 }' 00:12:02.586 21:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:02.586 21:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.845 21:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:03.104 [2024-07-13 21:55:22.385895] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:03.104 [2024-07-13 21:55:22.385937] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:12:03.104 21:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:03.399 [2024-07-13 21:55:22.558392] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:03.399 [2024-07-13 21:55:22.558435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:03.399 [2024-07-13 21:55:22.558446] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:03.399 [2024-07-13 21:55:22.558458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:03.399 21:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:03.399 [2024-07-13 21:55:22.768074] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:03.399 BaseBdev1 00:12:03.659 21:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:03.659 21:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:03.659 21:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:03.659 21:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
i 00:12:03.659 21:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:03.659 21:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:03.659 21:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:03.659 21:55:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:03.918 [ 00:12:03.918 { 00:12:03.918 "name": "BaseBdev1", 00:12:03.918 "aliases": [ 00:12:03.918 "49ff2c91-a8a6-41ca-9a40-db04fbf3cabb" 00:12:03.918 ], 00:12:03.918 "product_name": "Malloc disk", 00:12:03.918 "block_size": 512, 00:12:03.918 "num_blocks": 65536, 00:12:03.918 "uuid": "49ff2c91-a8a6-41ca-9a40-db04fbf3cabb", 00:12:03.918 "assigned_rate_limits": { 00:12:03.918 "rw_ios_per_sec": 0, 00:12:03.918 "rw_mbytes_per_sec": 0, 00:12:03.918 "r_mbytes_per_sec": 0, 00:12:03.918 "w_mbytes_per_sec": 0 00:12:03.918 }, 00:12:03.918 "claimed": true, 00:12:03.918 "claim_type": "exclusive_write", 00:12:03.918 "zoned": false, 00:12:03.918 "supported_io_types": { 00:12:03.918 "read": true, 00:12:03.918 "write": true, 00:12:03.918 "unmap": true, 00:12:03.918 "flush": true, 00:12:03.918 "reset": true, 00:12:03.918 "nvme_admin": false, 00:12:03.918 "nvme_io": false, 00:12:03.918 "nvme_io_md": false, 00:12:03.918 "write_zeroes": true, 00:12:03.918 "zcopy": true, 00:12:03.918 "get_zone_info": false, 00:12:03.918 "zone_management": false, 00:12:03.918 "zone_append": false, 00:12:03.918 "compare": false, 00:12:03.918 "compare_and_write": false, 00:12:03.918 "abort": true, 00:12:03.918 "seek_hole": false, 00:12:03.918 "seek_data": false, 00:12:03.918 "copy": true, 00:12:03.918 "nvme_iov_md": false 00:12:03.918 }, 00:12:03.918 
"memory_domains": [ 00:12:03.918 { 00:12:03.918 "dma_device_id": "system", 00:12:03.918 "dma_device_type": 1 00:12:03.918 }, 00:12:03.918 { 00:12:03.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.918 "dma_device_type": 2 00:12:03.918 } 00:12:03.918 ], 00:12:03.918 "driver_specific": {} 00:12:03.918 } 00:12:03.918 ] 00:12:03.918 21:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:03.918 21:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:03.918 21:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:03.918 21:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:03.918 21:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:03.918 21:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:03.918 21:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:03.918 21:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:03.918 21:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:03.918 21:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:03.918 21:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:03.918 21:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:03.918 21:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.918 21:55:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:03.918 "name": "Existed_Raid", 00:12:03.918 "uuid": "f9361db2-0db2-4fed-8a8d-a8fd855f35d7", 00:12:03.918 "strip_size_kb": 64, 00:12:03.918 "state": "configuring", 00:12:03.918 "raid_level": "raid0", 00:12:03.918 "superblock": true, 00:12:03.918 "num_base_bdevs": 2, 00:12:03.918 "num_base_bdevs_discovered": 1, 00:12:03.918 "num_base_bdevs_operational": 2, 00:12:03.918 "base_bdevs_list": [ 00:12:03.918 { 00:12:03.918 "name": "BaseBdev1", 00:12:03.918 "uuid": "49ff2c91-a8a6-41ca-9a40-db04fbf3cabb", 00:12:03.918 "is_configured": true, 00:12:03.918 "data_offset": 2048, 00:12:03.918 "data_size": 63488 00:12:03.918 }, 00:12:03.918 { 00:12:03.918 "name": "BaseBdev2", 00:12:03.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.918 "is_configured": false, 00:12:03.918 "data_offset": 0, 00:12:03.918 "data_size": 0 00:12:03.918 } 00:12:03.918 ] 00:12:03.918 }' 00:12:03.918 21:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:03.918 21:55:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.485 21:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:04.744 [2024-07-13 21:55:23.891040] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:04.744 [2024-07-13 21:55:23.891099] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:12:04.744 21:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:04.744 [2024-07-13 21:55:24.063658] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:04.744 [2024-07-13 21:55:24.065390] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:04.744 [2024-07-13 21:55:24.065431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:04.744 21:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:04.744 21:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:04.744 21:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:04.744 21:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:04.744 21:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:04.744 21:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:04.744 21:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:04.744 21:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:04.744 21:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:04.744 21:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:04.744 21:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:04.744 21:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:04.744 21:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:04.745 21:55:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.003 21:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:05.003 "name": "Existed_Raid", 00:12:05.003 "uuid": "ed304649-85ae-4f13-8b03-4be59a1fe7c5", 00:12:05.003 "strip_size_kb": 64, 00:12:05.003 "state": "configuring", 00:12:05.003 "raid_level": "raid0", 00:12:05.003 "superblock": true, 00:12:05.003 "num_base_bdevs": 2, 00:12:05.003 "num_base_bdevs_discovered": 1, 00:12:05.003 "num_base_bdevs_operational": 2, 00:12:05.003 "base_bdevs_list": [ 00:12:05.003 { 00:12:05.003 "name": "BaseBdev1", 00:12:05.003 "uuid": "49ff2c91-a8a6-41ca-9a40-db04fbf3cabb", 00:12:05.003 "is_configured": true, 00:12:05.003 "data_offset": 2048, 00:12:05.003 "data_size": 63488 00:12:05.003 }, 00:12:05.003 { 00:12:05.003 "name": "BaseBdev2", 00:12:05.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.003 "is_configured": false, 00:12:05.003 "data_offset": 0, 00:12:05.003 "data_size": 0 00:12:05.003 } 00:12:05.003 ] 00:12:05.003 }' 00:12:05.003 21:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:05.003 21:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.570 21:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:05.570 [2024-07-13 21:55:24.859789] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:05.570 [2024-07-13 21:55:24.860036] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:12:05.570 [2024-07-13 21:55:24.860055] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:05.570 [2024-07-13 21:55:24.860320] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000010570 00:12:05.570 [2024-07-13 21:55:24.860500] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:12:05.570 [2024-07-13 21:55:24.860516] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:12:05.570 BaseBdev2 00:12:05.570 [2024-07-13 21:55:24.860661] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.570 21:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:05.570 21:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:05.570 21:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:05.570 21:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:05.570 21:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:05.570 21:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:05.570 21:55:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:05.829 21:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:05.829 [ 00:12:05.829 { 00:12:05.829 "name": "BaseBdev2", 00:12:05.829 "aliases": [ 00:12:05.829 "c24112bc-a3a4-4b62-baf5-0f3ba819d0eb" 00:12:05.829 ], 00:12:05.829 "product_name": "Malloc disk", 00:12:05.829 "block_size": 512, 00:12:05.829 "num_blocks": 65536, 00:12:05.829 "uuid": "c24112bc-a3a4-4b62-baf5-0f3ba819d0eb", 00:12:05.829 "assigned_rate_limits": { 00:12:05.829 "rw_ios_per_sec": 0, 00:12:05.829 
"rw_mbytes_per_sec": 0, 00:12:05.829 "r_mbytes_per_sec": 0, 00:12:05.829 "w_mbytes_per_sec": 0 00:12:05.829 }, 00:12:05.829 "claimed": true, 00:12:05.829 "claim_type": "exclusive_write", 00:12:05.829 "zoned": false, 00:12:05.829 "supported_io_types": { 00:12:05.829 "read": true, 00:12:05.829 "write": true, 00:12:05.829 "unmap": true, 00:12:05.829 "flush": true, 00:12:05.829 "reset": true, 00:12:05.829 "nvme_admin": false, 00:12:05.829 "nvme_io": false, 00:12:05.829 "nvme_io_md": false, 00:12:05.829 "write_zeroes": true, 00:12:05.829 "zcopy": true, 00:12:05.829 "get_zone_info": false, 00:12:05.829 "zone_management": false, 00:12:05.829 "zone_append": false, 00:12:05.829 "compare": false, 00:12:05.829 "compare_and_write": false, 00:12:05.829 "abort": true, 00:12:05.829 "seek_hole": false, 00:12:05.829 "seek_data": false, 00:12:05.829 "copy": true, 00:12:05.829 "nvme_iov_md": false 00:12:05.829 }, 00:12:05.829 "memory_domains": [ 00:12:05.829 { 00:12:05.829 "dma_device_id": "system", 00:12:05.829 "dma_device_type": 1 00:12:05.829 }, 00:12:05.829 { 00:12:05.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.829 "dma_device_type": 2 00:12:05.829 } 00:12:05.829 ], 00:12:05.829 "driver_specific": {} 00:12:05.829 } 00:12:05.829 ] 00:12:05.829 21:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:05.829 21:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:05.829 21:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:05.829 21:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:12:05.830 21:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:05.830 21:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:05.830 21:55:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:05.830 21:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:05.830 21:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:05.830 21:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:05.830 21:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:05.830 21:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:05.830 21:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:05.830 21:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:05.830 21:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.089 21:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:06.089 "name": "Existed_Raid", 00:12:06.089 "uuid": "ed304649-85ae-4f13-8b03-4be59a1fe7c5", 00:12:06.089 "strip_size_kb": 64, 00:12:06.089 "state": "online", 00:12:06.089 "raid_level": "raid0", 00:12:06.089 "superblock": true, 00:12:06.089 "num_base_bdevs": 2, 00:12:06.089 "num_base_bdevs_discovered": 2, 00:12:06.089 "num_base_bdevs_operational": 2, 00:12:06.089 "base_bdevs_list": [ 00:12:06.089 { 00:12:06.089 "name": "BaseBdev1", 00:12:06.089 "uuid": "49ff2c91-a8a6-41ca-9a40-db04fbf3cabb", 00:12:06.089 "is_configured": true, 00:12:06.089 "data_offset": 2048, 00:12:06.089 "data_size": 63488 00:12:06.089 }, 00:12:06.089 { 00:12:06.089 "name": "BaseBdev2", 00:12:06.089 "uuid": "c24112bc-a3a4-4b62-baf5-0f3ba819d0eb", 00:12:06.089 "is_configured": true, 00:12:06.089 
"data_offset": 2048, 00:12:06.089 "data_size": 63488 00:12:06.089 } 00:12:06.089 ] 00:12:06.089 }' 00:12:06.089 21:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:06.089 21:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.656 21:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:06.656 21:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:06.656 21:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:06.656 21:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:06.656 21:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:06.656 21:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:06.656 21:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:06.656 21:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:06.656 [2024-07-13 21:55:26.015158] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:06.656 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:06.656 "name": "Existed_Raid", 00:12:06.656 "aliases": [ 00:12:06.656 "ed304649-85ae-4f13-8b03-4be59a1fe7c5" 00:12:06.656 ], 00:12:06.656 "product_name": "Raid Volume", 00:12:06.656 "block_size": 512, 00:12:06.656 "num_blocks": 126976, 00:12:06.656 "uuid": "ed304649-85ae-4f13-8b03-4be59a1fe7c5", 00:12:06.656 "assigned_rate_limits": { 00:12:06.656 "rw_ios_per_sec": 0, 00:12:06.656 "rw_mbytes_per_sec": 0, 00:12:06.656 "r_mbytes_per_sec": 0, 00:12:06.656 
"w_mbytes_per_sec": 0 00:12:06.656 }, 00:12:06.656 "claimed": false, 00:12:06.656 "zoned": false, 00:12:06.656 "supported_io_types": { 00:12:06.656 "read": true, 00:12:06.656 "write": true, 00:12:06.656 "unmap": true, 00:12:06.656 "flush": true, 00:12:06.656 "reset": true, 00:12:06.656 "nvme_admin": false, 00:12:06.656 "nvme_io": false, 00:12:06.656 "nvme_io_md": false, 00:12:06.656 "write_zeroes": true, 00:12:06.657 "zcopy": false, 00:12:06.657 "get_zone_info": false, 00:12:06.657 "zone_management": false, 00:12:06.657 "zone_append": false, 00:12:06.657 "compare": false, 00:12:06.657 "compare_and_write": false, 00:12:06.657 "abort": false, 00:12:06.657 "seek_hole": false, 00:12:06.657 "seek_data": false, 00:12:06.657 "copy": false, 00:12:06.657 "nvme_iov_md": false 00:12:06.657 }, 00:12:06.657 "memory_domains": [ 00:12:06.657 { 00:12:06.657 "dma_device_id": "system", 00:12:06.657 "dma_device_type": 1 00:12:06.657 }, 00:12:06.657 { 00:12:06.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.657 "dma_device_type": 2 00:12:06.657 }, 00:12:06.657 { 00:12:06.657 "dma_device_id": "system", 00:12:06.657 "dma_device_type": 1 00:12:06.657 }, 00:12:06.657 { 00:12:06.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.657 "dma_device_type": 2 00:12:06.657 } 00:12:06.657 ], 00:12:06.657 "driver_specific": { 00:12:06.657 "raid": { 00:12:06.657 "uuid": "ed304649-85ae-4f13-8b03-4be59a1fe7c5", 00:12:06.657 "strip_size_kb": 64, 00:12:06.657 "state": "online", 00:12:06.657 "raid_level": "raid0", 00:12:06.657 "superblock": true, 00:12:06.657 "num_base_bdevs": 2, 00:12:06.657 "num_base_bdevs_discovered": 2, 00:12:06.657 "num_base_bdevs_operational": 2, 00:12:06.657 "base_bdevs_list": [ 00:12:06.657 { 00:12:06.657 "name": "BaseBdev1", 00:12:06.657 "uuid": "49ff2c91-a8a6-41ca-9a40-db04fbf3cabb", 00:12:06.657 "is_configured": true, 00:12:06.657 "data_offset": 2048, 00:12:06.657 "data_size": 63488 00:12:06.657 }, 00:12:06.657 { 00:12:06.657 "name": "BaseBdev2", 00:12:06.657 
"uuid": "c24112bc-a3a4-4b62-baf5-0f3ba819d0eb", 00:12:06.657 "is_configured": true, 00:12:06.657 "data_offset": 2048, 00:12:06.657 "data_size": 63488 00:12:06.657 } 00:12:06.657 ] 00:12:06.657 } 00:12:06.657 } 00:12:06.657 }' 00:12:06.657 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:06.915 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:06.915 BaseBdev2' 00:12:06.915 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:06.915 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:06.915 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:06.915 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:06.915 "name": "BaseBdev1", 00:12:06.915 "aliases": [ 00:12:06.915 "49ff2c91-a8a6-41ca-9a40-db04fbf3cabb" 00:12:06.915 ], 00:12:06.915 "product_name": "Malloc disk", 00:12:06.915 "block_size": 512, 00:12:06.915 "num_blocks": 65536, 00:12:06.915 "uuid": "49ff2c91-a8a6-41ca-9a40-db04fbf3cabb", 00:12:06.915 "assigned_rate_limits": { 00:12:06.915 "rw_ios_per_sec": 0, 00:12:06.915 "rw_mbytes_per_sec": 0, 00:12:06.915 "r_mbytes_per_sec": 0, 00:12:06.915 "w_mbytes_per_sec": 0 00:12:06.915 }, 00:12:06.915 "claimed": true, 00:12:06.915 "claim_type": "exclusive_write", 00:12:06.915 "zoned": false, 00:12:06.915 "supported_io_types": { 00:12:06.915 "read": true, 00:12:06.915 "write": true, 00:12:06.915 "unmap": true, 00:12:06.915 "flush": true, 00:12:06.915 "reset": true, 00:12:06.915 "nvme_admin": false, 00:12:06.915 "nvme_io": false, 00:12:06.915 "nvme_io_md": false, 00:12:06.915 "write_zeroes": true, 
00:12:06.915 "zcopy": true, 00:12:06.915 "get_zone_info": false, 00:12:06.915 "zone_management": false, 00:12:06.915 "zone_append": false, 00:12:06.915 "compare": false, 00:12:06.915 "compare_and_write": false, 00:12:06.915 "abort": true, 00:12:06.915 "seek_hole": false, 00:12:06.915 "seek_data": false, 00:12:06.915 "copy": true, 00:12:06.915 "nvme_iov_md": false 00:12:06.915 }, 00:12:06.915 "memory_domains": [ 00:12:06.915 { 00:12:06.915 "dma_device_id": "system", 00:12:06.915 "dma_device_type": 1 00:12:06.915 }, 00:12:06.915 { 00:12:06.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.915 "dma_device_type": 2 00:12:06.915 } 00:12:06.915 ], 00:12:06.915 "driver_specific": {} 00:12:06.915 }' 00:12:06.915 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:06.915 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:07.173 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:07.173 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:07.173 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:07.173 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:07.173 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:07.173 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:07.173 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:07.173 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:07.174 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:07.174 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:07.174 21:55:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:07.174 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:07.174 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:07.432 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:07.432 "name": "BaseBdev2", 00:12:07.432 "aliases": [ 00:12:07.432 "c24112bc-a3a4-4b62-baf5-0f3ba819d0eb" 00:12:07.432 ], 00:12:07.432 "product_name": "Malloc disk", 00:12:07.432 "block_size": 512, 00:12:07.432 "num_blocks": 65536, 00:12:07.432 "uuid": "c24112bc-a3a4-4b62-baf5-0f3ba819d0eb", 00:12:07.432 "assigned_rate_limits": { 00:12:07.432 "rw_ios_per_sec": 0, 00:12:07.432 "rw_mbytes_per_sec": 0, 00:12:07.432 "r_mbytes_per_sec": 0, 00:12:07.432 "w_mbytes_per_sec": 0 00:12:07.432 }, 00:12:07.432 "claimed": true, 00:12:07.432 "claim_type": "exclusive_write", 00:12:07.432 "zoned": false, 00:12:07.433 "supported_io_types": { 00:12:07.433 "read": true, 00:12:07.433 "write": true, 00:12:07.433 "unmap": true, 00:12:07.433 "flush": true, 00:12:07.433 "reset": true, 00:12:07.433 "nvme_admin": false, 00:12:07.433 "nvme_io": false, 00:12:07.433 "nvme_io_md": false, 00:12:07.433 "write_zeroes": true, 00:12:07.433 "zcopy": true, 00:12:07.433 "get_zone_info": false, 00:12:07.433 "zone_management": false, 00:12:07.433 "zone_append": false, 00:12:07.433 "compare": false, 00:12:07.433 "compare_and_write": false, 00:12:07.433 "abort": true, 00:12:07.433 "seek_hole": false, 00:12:07.433 "seek_data": false, 00:12:07.433 "copy": true, 00:12:07.433 "nvme_iov_md": false 00:12:07.433 }, 00:12:07.433 "memory_domains": [ 00:12:07.433 { 00:12:07.433 "dma_device_id": "system", 00:12:07.433 "dma_device_type": 1 00:12:07.433 }, 00:12:07.433 { 00:12:07.433 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:07.433 "dma_device_type": 2 00:12:07.433 } 00:12:07.433 ], 00:12:07.433 "driver_specific": {} 00:12:07.433 }' 00:12:07.433 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:07.433 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:07.433 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:07.433 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:07.433 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:07.691 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:07.691 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:07.691 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:07.691 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:07.691 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:07.691 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:07.691 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:07.691 21:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:07.950 [2024-07-13 21:55:27.125860] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:07.950 [2024-07-13 21:55:27.125894] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:07.950 [2024-07-13 21:55:27.125955] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:07.950 21:55:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:07.950 21:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:12:07.950 21:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:07.950 21:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:12:07.950 21:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:12:07.950 21:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:12:07.950 21:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:07.950 21:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:12:07.950 21:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:07.950 21:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:07.950 21:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:12:07.950 21:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:07.950 21:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:07.950 21:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:07.950 21:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:07.950 21:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:07.950 21:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:12:08.209 21:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:08.209 "name": "Existed_Raid", 00:12:08.209 "uuid": "ed304649-85ae-4f13-8b03-4be59a1fe7c5", 00:12:08.209 "strip_size_kb": 64, 00:12:08.209 "state": "offline", 00:12:08.209 "raid_level": "raid0", 00:12:08.209 "superblock": true, 00:12:08.209 "num_base_bdevs": 2, 00:12:08.209 "num_base_bdevs_discovered": 1, 00:12:08.209 "num_base_bdevs_operational": 1, 00:12:08.209 "base_bdevs_list": [ 00:12:08.209 { 00:12:08.209 "name": null, 00:12:08.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.209 "is_configured": false, 00:12:08.209 "data_offset": 2048, 00:12:08.209 "data_size": 63488 00:12:08.209 }, 00:12:08.209 { 00:12:08.209 "name": "BaseBdev2", 00:12:08.209 "uuid": "c24112bc-a3a4-4b62-baf5-0f3ba819d0eb", 00:12:08.209 "is_configured": true, 00:12:08.209 "data_offset": 2048, 00:12:08.209 "data_size": 63488 00:12:08.209 } 00:12:08.209 ] 00:12:08.209 }' 00:12:08.209 21:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:08.209 21:55:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.468 21:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:08.468 21:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:08.468 21:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:08.468 21:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:08.727 21:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:08.727 21:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:12:08.727 21:55:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:08.727 [2024-07-13 21:55:28.113072] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:08.727 [2024-07-13 21:55:28.113130] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:12:08.985 21:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:08.985 21:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:08.985 21:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:08.985 21:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:09.244 21:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:09.244 21:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:09.244 21:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:12:09.244 21:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1345750 00:12:09.244 21:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1345750 ']' 00:12:09.244 21:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 1345750 00:12:09.244 21:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:12:09.244 21:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:09.244 21:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 1345750 00:12:09.244 21:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:09.244 21:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:09.244 21:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1345750' 00:12:09.244 killing process with pid 1345750 00:12:09.244 21:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 1345750 00:12:09.244 [2024-07-13 21:55:28.440810] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:09.244 21:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 1345750 00:12:09.244 [2024-07-13 21:55:28.457627] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:10.622 21:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:12:10.622 00:12:10.622 real 0m9.074s 00:12:10.622 user 0m14.810s 00:12:10.622 sys 0m1.728s 00:12:10.622 21:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:10.622 21:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.622 ************************************ 00:12:10.622 END TEST raid_state_function_test_sb 00:12:10.622 ************************************ 00:12:10.622 21:55:29 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:10.622 21:55:29 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:12:10.622 21:55:29 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:10.622 21:55:29 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:10.622 21:55:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:10.622 ************************************ 00:12:10.622 START TEST raid_superblock_test 
00:12:10.622 ************************************
00:12:10.622 21:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 2
00:12:10.622 21:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0
00:12:10.622 21:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2
00:12:10.622 21:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=()
00:12:10.622 21:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc
00:12:10.623 21:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=()
00:12:10.623 21:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt
00:12:10.623 21:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=()
00:12:10.623 21:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid
00:12:10.623 21:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1
00:12:10.623 21:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size
00:12:10.623 21:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg
00:12:10.623 21:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid
00:12:10.623 21:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev
00:12:10.623 21:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']'
00:12:10.623 21:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64
00:12:10.623 21:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64'
00:12:10.623 21:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1347455
00:12:10.623 21:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1347455 /var/tmp/spdk-raid.sock
00:12:10.623 21:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:12:10.623 21:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 1347455 ']'
00:12:10.623 21:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:12:10.623 21:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100
00:12:10.623 21:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:12:10.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:12:10.623 21:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable
00:12:10.623 21:55:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:10.623 [2024-07-13 21:55:29.844608] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:12:10.623 [2024-07-13 21:55:29.844699] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1347455 ]
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3d:01.0 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3d:01.1 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3d:01.2 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3d:01.3 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3d:01.4 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3d:01.5 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3d:01.6 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3d:01.7 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3d:02.0 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3d:02.1 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3d:02.2 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3d:02.3 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3d:02.4 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3d:02.5 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3d:02.6 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3d:02.7 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3f:01.0 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3f:01.1 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3f:01.2 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3f:01.3 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3f:01.4 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3f:01.5 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3f:01.6 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3f:01.7 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3f:02.0 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3f:02.1 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3f:02.2 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3f:02.3 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3f:02.4 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3f:02.5 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3f:02.6 cannot be used
00:12:10.623 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:10.623 EAL: Requested device 0000:3f:02.7 cannot be used
00:12:10.623 [2024-07-13 21:55:30.006684] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:10.882 [2024-07-13 21:55:30.224051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:11.140 [2024-07-13 21:55:30.462312] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:11.140 [2024-07-13 21:55:30.462338] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:11.399 21:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:12:11.399 21:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0
00:12:11.399 21:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 ))
00:12:11.399 21:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs ))
00:12:11.399 21:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1
00:12:11.399 21:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1
00:12:11.399 21:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local
bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:12:11.399 21:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:11.399 21:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt)
00:12:11.399 21:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:11.399 21:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:12:11.658 malloc1
00:12:11.658 21:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:11.658 [2024-07-13 21:55:30.957524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:11.658 [2024-07-13 21:55:30.957578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:11.658 [2024-07-13 21:55:30.957604] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680
00:12:11.658 [2024-07-13 21:55:30.957617] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:11.658 [2024-07-13 21:55:30.959676] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:11.658 [2024-07-13 21:55:30.959706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:11.658 pt1
00:12:11.658 21:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ ))
00:12:11.658 21:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs ))
00:12:11.658 21:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2
00:12:11.658 21:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2
00:12:11.658 21:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:12:11.658 21:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:11.658 21:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt)
00:12:11.658 21:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:11.658 21:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:12:11.917 malloc2
00:12:11.917 21:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:12.176 [2024-07-13 21:55:31.324856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:12.176 [2024-07-13 21:55:31.324909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:12.176 [2024-07-13 21:55:31.324933] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280
00:12:12.176 [2024-07-13 21:55:31.324944] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:12.176 [2024-07-13 21:55:31.327065] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:12.176 [2024-07-13 21:55:31.327098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:12.176 pt2
00:12:12.176 21:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ ))
00:12:12.176 21:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs ))
00:12:12.176 21:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s
00:12:12.176 [2024-07-13 21:55:31.509375] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:12.176 [2024-07-13 21:55:31.511096] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:12.176 [2024-07-13 21:55:31.511257] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040880
00:12:12.176 [2024-07-13 21:55:31.511270] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:12:12.176 [2024-07-13 21:55:31.511523] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570
00:12:12.176 [2024-07-13 21:55:31.511706] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040880
00:12:12.176 [2024-07-13 21:55:31.511718] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040880
00:12:12.176 [2024-07-13 21:55:31.511852] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:12.176 21:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:12:12.176 21:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:12:12.176 21:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:12:12.176 21:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:12:12.176 21:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:12:12.176 21:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:12:12.176 21:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- #
local raid_bdev_info
00:12:12.176 21:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:12:12.176 21:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:12:12.176 21:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:12:12.176 21:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:12:12.176 21:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:12.435 21:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:12:12.435 "name": "raid_bdev1",
00:12:12.435 "uuid": "17f5b6ee-e5d4-4d1f-a63a-21aff93db0e5",
00:12:12.435 "strip_size_kb": 64,
00:12:12.435 "state": "online",
00:12:12.435 "raid_level": "raid0",
00:12:12.435 "superblock": true,
00:12:12.435 "num_base_bdevs": 2,
00:12:12.435 "num_base_bdevs_discovered": 2,
00:12:12.435 "num_base_bdevs_operational": 2,
00:12:12.435 "base_bdevs_list": [
00:12:12.435 {
00:12:12.435 "name": "pt1",
00:12:12.435 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:12.435 "is_configured": true,
00:12:12.435 "data_offset": 2048,
00:12:12.435 "data_size": 63488
00:12:12.435 },
00:12:12.435 {
00:12:12.435 "name": "pt2",
00:12:12.435 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:12.435 "is_configured": true,
00:12:12.435 "data_offset": 2048,
00:12:12.435 "data_size": 63488
00:12:12.435 }
00:12:12.435 ]
00:12:12.435 }'
00:12:12.435 21:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:12:12.435 21:55:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:13.002 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1
00:12:13.002 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1
00:12:13.002 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info
00:12:13.002 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info
00:12:13.002 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names
00:12:13.002 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name
00:12:13.002 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]'
00:12:13.002 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:12:13.002 [2024-07-13 21:55:32.323691] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:13.002 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{
00:12:13.002 "name": "raid_bdev1",
00:12:13.002 "aliases": [
00:12:13.002 "17f5b6ee-e5d4-4d1f-a63a-21aff93db0e5"
00:12:13.002 ],
00:12:13.002 "product_name": "Raid Volume",
00:12:13.002 "block_size": 512,
00:12:13.002 "num_blocks": 126976,
00:12:13.002 "uuid": "17f5b6ee-e5d4-4d1f-a63a-21aff93db0e5",
00:12:13.002 "assigned_rate_limits": {
00:12:13.002 "rw_ios_per_sec": 0,
00:12:13.002 "rw_mbytes_per_sec": 0,
00:12:13.002 "r_mbytes_per_sec": 0,
00:12:13.002 "w_mbytes_per_sec": 0
00:12:13.002 },
00:12:13.002 "claimed": false,
00:12:13.002 "zoned": false,
00:12:13.002 "supported_io_types": {
00:12:13.002 "read": true,
00:12:13.002 "write": true,
00:12:13.002 "unmap": true,
00:12:13.002 "flush": true,
00:12:13.002 "reset": true,
00:12:13.002 "nvme_admin": false,
00:12:13.002 "nvme_io": false,
00:12:13.002 "nvme_io_md": false,
00:12:13.002 "write_zeroes": true,
00:12:13.002 "zcopy": false,
00:12:13.002 "get_zone_info": false,
00:12:13.002 "zone_management": false,
00:12:13.002 "zone_append": false,
00:12:13.002 "compare": false,
00:12:13.002 "compare_and_write": false,
00:12:13.002 "abort": false,
00:12:13.002 "seek_hole": false,
00:12:13.002 "seek_data": false,
00:12:13.002 "copy": false,
00:12:13.002 "nvme_iov_md": false
00:12:13.002 },
00:12:13.002 "memory_domains": [
00:12:13.002 {
00:12:13.002 "dma_device_id": "system",
00:12:13.002 "dma_device_type": 1
00:12:13.002 },
00:12:13.002 {
00:12:13.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:13.002 "dma_device_type": 2
00:12:13.002 },
00:12:13.002 {
00:12:13.002 "dma_device_id": "system",
00:12:13.002 "dma_device_type": 1
00:12:13.002 },
00:12:13.002 {
00:12:13.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:13.002 "dma_device_type": 2
00:12:13.002 }
00:12:13.002 ],
00:12:13.002 "driver_specific": {
00:12:13.002 "raid": {
00:12:13.002 "uuid": "17f5b6ee-e5d4-4d1f-a63a-21aff93db0e5",
00:12:13.002 "strip_size_kb": 64,
00:12:13.002 "state": "online",
00:12:13.002 "raid_level": "raid0",
00:12:13.002 "superblock": true,
00:12:13.002 "num_base_bdevs": 2,
00:12:13.002 "num_base_bdevs_discovered": 2,
00:12:13.002 "num_base_bdevs_operational": 2,
00:12:13.002 "base_bdevs_list": [
00:12:13.002 {
00:12:13.002 "name": "pt1",
00:12:13.002 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:13.002 "is_configured": true,
00:12:13.002 "data_offset": 2048,
00:12:13.002 "data_size": 63488
00:12:13.002 },
00:12:13.002 {
00:12:13.002 "name": "pt2",
00:12:13.002 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:13.002 "is_configured": true,
00:12:13.002 "data_offset": 2048,
00:12:13.002 "data_size": 63488
00:12:13.002 }
00:12:13.002 ]
00:12:13.002 }
00:12:13.002 }
00:12:13.002 }'
00:12:13.002 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:13.002 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1
00:12:13.002 pt2'
00:12:13.002 21:55:32
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:12:13.002 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:12:13.002 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1
00:12:13.261 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:12:13.261 "name": "pt1",
00:12:13.262 "aliases": [
00:12:13.262 "00000000-0000-0000-0000-000000000001"
00:12:13.262 ],
00:12:13.262 "product_name": "passthru",
00:12:13.262 "block_size": 512,
00:12:13.262 "num_blocks": 65536,
00:12:13.262 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:13.262 "assigned_rate_limits": {
00:12:13.262 "rw_ios_per_sec": 0,
00:12:13.262 "rw_mbytes_per_sec": 0,
00:12:13.262 "r_mbytes_per_sec": 0,
00:12:13.262 "w_mbytes_per_sec": 0
00:12:13.262 },
00:12:13.262 "claimed": true,
00:12:13.262 "claim_type": "exclusive_write",
00:12:13.262 "zoned": false,
00:12:13.262 "supported_io_types": {
00:12:13.262 "read": true,
00:12:13.262 "write": true,
00:12:13.262 "unmap": true,
00:12:13.262 "flush": true,
00:12:13.262 "reset": true,
00:12:13.262 "nvme_admin": false,
00:12:13.262 "nvme_io": false,
00:12:13.262 "nvme_io_md": false,
00:12:13.262 "write_zeroes": true,
00:12:13.262 "zcopy": true,
00:12:13.262 "get_zone_info": false,
00:12:13.262 "zone_management": false,
00:12:13.262 "zone_append": false,
00:12:13.262 "compare": false,
00:12:13.262 "compare_and_write": false,
00:12:13.262 "abort": true,
00:12:13.262 "seek_hole": false,
00:12:13.262 "seek_data": false,
00:12:13.262 "copy": true,
00:12:13.262 "nvme_iov_md": false
00:12:13.262 },
00:12:13.262 "memory_domains": [
00:12:13.262 {
00:12:13.262 "dma_device_id": "system",
00:12:13.262 "dma_device_type": 1
00:12:13.262 },
00:12:13.262 {
00:12:13.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:13.262 "dma_device_type": 2
00:12:13.262 }
00:12:13.262 ],
00:12:13.262 "driver_specific": {
00:12:13.262 "passthru": {
00:12:13.262 "name": "pt1",
00:12:13.262 "base_bdev_name": "malloc1"
00:12:13.262 }
00:12:13.262 }
00:12:13.262 }'
00:12:13.262 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:12:13.262 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:12:13.262 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:12:13.262 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:12:13.521 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:12:13.521 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:12:13.521 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:12:13.521 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:12:13.521 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:12:13.521 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:12:13.521 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:12:13.521 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:12:13.521 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:12:13.521 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2
00:12:13.521 21:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:12:13.781 21:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:12:13.781 "name": "pt2",
00:12:13.781 "aliases": [
00:12:13.781 "00000000-0000-0000-0000-000000000002"
00:12:13.781 ],
00:12:13.781 "product_name": "passthru",
00:12:13.781 "block_size": 512,
00:12:13.781 "num_blocks": 65536,
00:12:13.781 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:13.781 "assigned_rate_limits": {
00:12:13.781 "rw_ios_per_sec": 0,
00:12:13.781 "rw_mbytes_per_sec": 0,
00:12:13.781 "r_mbytes_per_sec": 0,
00:12:13.781 "w_mbytes_per_sec": 0
00:12:13.781 },
00:12:13.781 "claimed": true,
00:12:13.781 "claim_type": "exclusive_write",
00:12:13.781 "zoned": false,
00:12:13.781 "supported_io_types": {
00:12:13.781 "read": true,
00:12:13.781 "write": true,
00:12:13.781 "unmap": true,
00:12:13.781 "flush": true,
00:12:13.781 "reset": true,
00:12:13.781 "nvme_admin": false,
00:12:13.781 "nvme_io": false,
00:12:13.781 "nvme_io_md": false,
00:12:13.781 "write_zeroes": true,
00:12:13.781 "zcopy": true,
00:12:13.781 "get_zone_info": false,
00:12:13.781 "zone_management": false,
00:12:13.781 "zone_append": false,
00:12:13.781 "compare": false,
00:12:13.781 "compare_and_write": false,
00:12:13.781 "abort": true,
00:12:13.781 "seek_hole": false,
00:12:13.781 "seek_data": false,
00:12:13.781 "copy": true,
00:12:13.781 "nvme_iov_md": false
00:12:13.781 },
00:12:13.781 "memory_domains": [
00:12:13.781 {
00:12:13.781 "dma_device_id": "system",
00:12:13.781 "dma_device_type": 1
00:12:13.781 },
00:12:13.781 {
00:12:13.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:13.781 "dma_device_type": 2
00:12:13.781 }
00:12:13.781 ],
00:12:13.781 "driver_specific": {
00:12:13.781 "passthru": {
00:12:13.781 "name": "pt2",
00:12:13.781 "base_bdev_name": "malloc2"
00:12:13.781 }
00:12:13.781 }
00:12:13.781 }'
00:12:13.781 21:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:12:13.781 21:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:12:13.781 21:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:12:13.781 21:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:12:13.781 21:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:12:13.781 21:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:12:13.781 21:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:12:13.781 21:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:12:14.047 21:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:12:14.047 21:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:12:14.047 21:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:12:14.047 21:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:12:14.047 21:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:12:14.047 21:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid'
00:12:14.047 [2024-07-13 21:55:33.382482] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:14.047 21:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=17f5b6ee-e5d4-4d1f-a63a-21aff93db0e5
00:12:14.047 21:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 17f5b6ee-e5d4-4d1f-a63a-21aff93db0e5 ']'
00:12:14.047 21:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:12:14.305 [2024-07-13 21:55:33.550689] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:14.305 [2024-07-13 21:55:33.550715] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from
online to offline
00:12:14.305 [2024-07-13 21:55:33.550786] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:14.305 [2024-07-13 21:55:33.550835] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:14.305 [2024-07-13 21:55:33.550850] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040880 name raid_bdev1, state offline
00:12:14.305 21:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:12:14.305 21:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]'
00:12:14.563 21:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev=
00:12:14.563 21:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']'
00:12:14.563 21:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}"
00:12:14.563 21:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:12:14.563 21:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}"
00:12:14.563 21:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:12:14.823 21:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:12:14.823 21:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:12:14.823 21:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']'
00:12:14.823 21:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1
00:12:14.823 21:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0
00:12:14.823 21:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1
00:12:14.823 21:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:12:14.823 21:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:12:14.823 21:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:12:15.082 21:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:12:15.082 21:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:12:15.082 21:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:12:15.082 21:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:12:15.082 21:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]]
00:12:15.082 21:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 [2024-07-13 21:55:34.352786] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:12:15.082 [2024-07-13 21:55:34.354540] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:12:15.082 [2024-07-13 21:55:34.354599] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:12:15.083 [2024-07-13 21:55:34.354643] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:12:15.083 [2024-07-13 21:55:34.354659] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:15.083 [2024-07-13 21:55:34.354671] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state configuring
00:12:15.083 request:
00:12:15.083 {
00:12:15.083 "name": "raid_bdev1",
00:12:15.083 "raid_level": "raid0",
00:12:15.083 "base_bdevs": [
00:12:15.083 "malloc1",
00:12:15.083 "malloc2"
00:12:15.083 ],
00:12:15.083 "strip_size_kb": 64,
00:12:15.083 "superblock": false,
00:12:15.083 "method": "bdev_raid_create",
00:12:15.083 "req_id": 1
00:12:15.083 }
00:12:15.083 Got JSON-RPC error response
00:12:15.083 response:
00:12:15.083 {
00:12:15.083 "code": -17,
00:12:15.083 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:12:15.083 }
00:12:15.083 21:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1
00:12:15.083 21:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:12:15.083 21:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:12:15.083 21:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:12:15.083 21:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]'
00:12:15.083 21:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:12:15.385 21:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev=
00:12:15.385 21:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']'
00:12:15.385 21:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:15.385 [2024-07-13 21:55:34.673564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:15.385 [2024-07-13 21:55:34.673617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:15.385 [2024-07-13 21:55:34.673639] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480
00:12:15.385 [2024-07-13 21:55:34.673652] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:15.385 [2024-07-13 21:55:34.675687] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:15.385 [2024-07-13 21:55:34.675718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:15.385 [2024-07-13 21:55:34.675793] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:12:15.385 [2024-07-13 21:55:34.675862] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:15.386 pt1
00:12:15.386 21:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2
00:12:15.386 21:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:12:15.386 21:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:12:15.386 21:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 --
# local raid_level=raid0 00:12:15.386 21:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:15.386 21:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:15.386 21:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:15.386 21:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:15.386 21:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:15.386 21:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:15.386 21:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:15.386 21:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.645 21:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:15.645 "name": "raid_bdev1", 00:12:15.645 "uuid": "17f5b6ee-e5d4-4d1f-a63a-21aff93db0e5", 00:12:15.645 "strip_size_kb": 64, 00:12:15.645 "state": "configuring", 00:12:15.645 "raid_level": "raid0", 00:12:15.645 "superblock": true, 00:12:15.645 "num_base_bdevs": 2, 00:12:15.645 "num_base_bdevs_discovered": 1, 00:12:15.645 "num_base_bdevs_operational": 2, 00:12:15.645 "base_bdevs_list": [ 00:12:15.645 { 00:12:15.645 "name": "pt1", 00:12:15.645 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:15.645 "is_configured": true, 00:12:15.645 "data_offset": 2048, 00:12:15.645 "data_size": 63488 00:12:15.645 }, 00:12:15.645 { 00:12:15.645 "name": null, 00:12:15.645 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:15.645 "is_configured": false, 00:12:15.645 "data_offset": 2048, 00:12:15.645 "data_size": 63488 00:12:15.645 } 00:12:15.645 ] 00:12:15.645 }' 00:12:15.645 21:55:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:15.645 21:55:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.213 21:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:12:16.213 21:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:12:16.213 21:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:16.213 21:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:16.213 [2024-07-13 21:55:35.467656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:16.213 [2024-07-13 21:55:35.467717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.213 [2024-07-13 21:55:35.467749] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041d80 00:12:16.213 [2024-07-13 21:55:35.467762] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.213 [2024-07-13 21:55:35.468206] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.213 [2024-07-13 21:55:35.468230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:16.213 [2024-07-13 21:55:35.468306] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:16.213 [2024-07-13 21:55:35.468336] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:16.213 [2024-07-13 21:55:35.468447] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:12:16.213 [2024-07-13 21:55:35.468460] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:16.213 [2024-07-13 21:55:35.468669] 
bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:12:16.213 [2024-07-13 21:55:35.468821] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:12:16.213 [2024-07-13 21:55:35.468831] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:12:16.213 [2024-07-13 21:55:35.468975] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.213 pt2 00:12:16.213 21:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:16.213 21:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:16.213 21:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:16.213 21:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:16.213 21:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:16.213 21:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:16.213 21:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:16.213 21:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:16.213 21:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:16.213 21:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:16.213 21:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:16.213 21:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:16.213 21:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:12:16.213 21:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.472 21:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:16.473 "name": "raid_bdev1", 00:12:16.473 "uuid": "17f5b6ee-e5d4-4d1f-a63a-21aff93db0e5", 00:12:16.473 "strip_size_kb": 64, 00:12:16.473 "state": "online", 00:12:16.473 "raid_level": "raid0", 00:12:16.473 "superblock": true, 00:12:16.473 "num_base_bdevs": 2, 00:12:16.473 "num_base_bdevs_discovered": 2, 00:12:16.473 "num_base_bdevs_operational": 2, 00:12:16.473 "base_bdevs_list": [ 00:12:16.473 { 00:12:16.473 "name": "pt1", 00:12:16.473 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:16.473 "is_configured": true, 00:12:16.473 "data_offset": 2048, 00:12:16.473 "data_size": 63488 00:12:16.473 }, 00:12:16.473 { 00:12:16.473 "name": "pt2", 00:12:16.473 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:16.473 "is_configured": true, 00:12:16.473 "data_offset": 2048, 00:12:16.473 "data_size": 63488 00:12:16.473 } 00:12:16.473 ] 00:12:16.473 }' 00:12:16.473 21:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:16.473 21:55:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.040 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:12:17.041 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:17.041 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:17.041 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:17.041 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:17.041 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:17.041 21:55:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:17.041 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:17.041 [2024-07-13 21:55:36.314107] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:17.041 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:17.041 "name": "raid_bdev1", 00:12:17.041 "aliases": [ 00:12:17.041 "17f5b6ee-e5d4-4d1f-a63a-21aff93db0e5" 00:12:17.041 ], 00:12:17.041 "product_name": "Raid Volume", 00:12:17.041 "block_size": 512, 00:12:17.041 "num_blocks": 126976, 00:12:17.041 "uuid": "17f5b6ee-e5d4-4d1f-a63a-21aff93db0e5", 00:12:17.041 "assigned_rate_limits": { 00:12:17.041 "rw_ios_per_sec": 0, 00:12:17.041 "rw_mbytes_per_sec": 0, 00:12:17.041 "r_mbytes_per_sec": 0, 00:12:17.041 "w_mbytes_per_sec": 0 00:12:17.041 }, 00:12:17.041 "claimed": false, 00:12:17.041 "zoned": false, 00:12:17.041 "supported_io_types": { 00:12:17.041 "read": true, 00:12:17.041 "write": true, 00:12:17.041 "unmap": true, 00:12:17.041 "flush": true, 00:12:17.041 "reset": true, 00:12:17.041 "nvme_admin": false, 00:12:17.041 "nvme_io": false, 00:12:17.041 "nvme_io_md": false, 00:12:17.041 "write_zeroes": true, 00:12:17.041 "zcopy": false, 00:12:17.041 "get_zone_info": false, 00:12:17.041 "zone_management": false, 00:12:17.041 "zone_append": false, 00:12:17.041 "compare": false, 00:12:17.041 "compare_and_write": false, 00:12:17.041 "abort": false, 00:12:17.041 "seek_hole": false, 00:12:17.041 "seek_data": false, 00:12:17.041 "copy": false, 00:12:17.041 "nvme_iov_md": false 00:12:17.041 }, 00:12:17.041 "memory_domains": [ 00:12:17.041 { 00:12:17.041 "dma_device_id": "system", 00:12:17.041 "dma_device_type": 1 00:12:17.041 }, 00:12:17.041 { 00:12:17.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.041 "dma_device_type": 2 00:12:17.041 }, 00:12:17.041 { 00:12:17.041 
"dma_device_id": "system", 00:12:17.041 "dma_device_type": 1 00:12:17.041 }, 00:12:17.041 { 00:12:17.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.041 "dma_device_type": 2 00:12:17.041 } 00:12:17.041 ], 00:12:17.041 "driver_specific": { 00:12:17.041 "raid": { 00:12:17.041 "uuid": "17f5b6ee-e5d4-4d1f-a63a-21aff93db0e5", 00:12:17.041 "strip_size_kb": 64, 00:12:17.041 "state": "online", 00:12:17.041 "raid_level": "raid0", 00:12:17.041 "superblock": true, 00:12:17.041 "num_base_bdevs": 2, 00:12:17.041 "num_base_bdevs_discovered": 2, 00:12:17.041 "num_base_bdevs_operational": 2, 00:12:17.041 "base_bdevs_list": [ 00:12:17.041 { 00:12:17.041 "name": "pt1", 00:12:17.041 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:17.041 "is_configured": true, 00:12:17.041 "data_offset": 2048, 00:12:17.041 "data_size": 63488 00:12:17.041 }, 00:12:17.041 { 00:12:17.041 "name": "pt2", 00:12:17.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:17.041 "is_configured": true, 00:12:17.041 "data_offset": 2048, 00:12:17.041 "data_size": 63488 00:12:17.041 } 00:12:17.041 ] 00:12:17.041 } 00:12:17.041 } 00:12:17.041 }' 00:12:17.041 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:17.041 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:17.041 pt2' 00:12:17.041 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:17.041 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:17.041 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:17.300 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:17.300 "name": "pt1", 00:12:17.300 "aliases": [ 00:12:17.300 
"00000000-0000-0000-0000-000000000001" 00:12:17.300 ], 00:12:17.300 "product_name": "passthru", 00:12:17.300 "block_size": 512, 00:12:17.300 "num_blocks": 65536, 00:12:17.300 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:17.300 "assigned_rate_limits": { 00:12:17.300 "rw_ios_per_sec": 0, 00:12:17.300 "rw_mbytes_per_sec": 0, 00:12:17.300 "r_mbytes_per_sec": 0, 00:12:17.300 "w_mbytes_per_sec": 0 00:12:17.300 }, 00:12:17.300 "claimed": true, 00:12:17.300 "claim_type": "exclusive_write", 00:12:17.300 "zoned": false, 00:12:17.300 "supported_io_types": { 00:12:17.300 "read": true, 00:12:17.300 "write": true, 00:12:17.300 "unmap": true, 00:12:17.300 "flush": true, 00:12:17.300 "reset": true, 00:12:17.300 "nvme_admin": false, 00:12:17.300 "nvme_io": false, 00:12:17.300 "nvme_io_md": false, 00:12:17.300 "write_zeroes": true, 00:12:17.300 "zcopy": true, 00:12:17.300 "get_zone_info": false, 00:12:17.300 "zone_management": false, 00:12:17.300 "zone_append": false, 00:12:17.300 "compare": false, 00:12:17.300 "compare_and_write": false, 00:12:17.300 "abort": true, 00:12:17.300 "seek_hole": false, 00:12:17.300 "seek_data": false, 00:12:17.300 "copy": true, 00:12:17.300 "nvme_iov_md": false 00:12:17.300 }, 00:12:17.300 "memory_domains": [ 00:12:17.300 { 00:12:17.300 "dma_device_id": "system", 00:12:17.300 "dma_device_type": 1 00:12:17.300 }, 00:12:17.300 { 00:12:17.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.300 "dma_device_type": 2 00:12:17.300 } 00:12:17.300 ], 00:12:17.300 "driver_specific": { 00:12:17.300 "passthru": { 00:12:17.300 "name": "pt1", 00:12:17.300 "base_bdev_name": "malloc1" 00:12:17.300 } 00:12:17.300 } 00:12:17.300 }' 00:12:17.300 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:17.300 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:17.300 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:17.300 21:55:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:17.300 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:17.300 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:17.300 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:17.559 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:17.559 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:17.559 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:17.559 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:17.559 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:17.559 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:17.559 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:17.559 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:17.818 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:17.818 "name": "pt2", 00:12:17.818 "aliases": [ 00:12:17.818 "00000000-0000-0000-0000-000000000002" 00:12:17.818 ], 00:12:17.818 "product_name": "passthru", 00:12:17.818 "block_size": 512, 00:12:17.818 "num_blocks": 65536, 00:12:17.818 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:17.818 "assigned_rate_limits": { 00:12:17.818 "rw_ios_per_sec": 0, 00:12:17.818 "rw_mbytes_per_sec": 0, 00:12:17.818 "r_mbytes_per_sec": 0, 00:12:17.818 "w_mbytes_per_sec": 0 00:12:17.818 }, 00:12:17.818 "claimed": true, 00:12:17.818 "claim_type": "exclusive_write", 00:12:17.818 "zoned": false, 00:12:17.818 "supported_io_types": { 
00:12:17.818 "read": true, 00:12:17.818 "write": true, 00:12:17.818 "unmap": true, 00:12:17.818 "flush": true, 00:12:17.818 "reset": true, 00:12:17.818 "nvme_admin": false, 00:12:17.818 "nvme_io": false, 00:12:17.818 "nvme_io_md": false, 00:12:17.818 "write_zeroes": true, 00:12:17.818 "zcopy": true, 00:12:17.818 "get_zone_info": false, 00:12:17.818 "zone_management": false, 00:12:17.818 "zone_append": false, 00:12:17.818 "compare": false, 00:12:17.818 "compare_and_write": false, 00:12:17.818 "abort": true, 00:12:17.818 "seek_hole": false, 00:12:17.818 "seek_data": false, 00:12:17.818 "copy": true, 00:12:17.818 "nvme_iov_md": false 00:12:17.818 }, 00:12:17.818 "memory_domains": [ 00:12:17.818 { 00:12:17.818 "dma_device_id": "system", 00:12:17.818 "dma_device_type": 1 00:12:17.818 }, 00:12:17.818 { 00:12:17.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.818 "dma_device_type": 2 00:12:17.818 } 00:12:17.818 ], 00:12:17.818 "driver_specific": { 00:12:17.818 "passthru": { 00:12:17.818 "name": "pt2", 00:12:17.818 "base_bdev_name": "malloc2" 00:12:17.818 } 00:12:17.818 } 00:12:17.818 }' 00:12:17.818 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:17.818 21:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:17.818 21:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:17.818 21:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:17.818 21:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:17.818 21:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:17.818 21:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:17.818 21:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:17.818 21:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:12:17.818 21:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:17.818 21:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:17.818 21:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:17.818 21:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:12:17.818 21:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:18.078 [2024-07-13 21:55:37.356855] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:18.078 21:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 17f5b6ee-e5d4-4d1f-a63a-21aff93db0e5 '!=' 17f5b6ee-e5d4-4d1f-a63a-21aff93db0e5 ']' 00:12:18.078 21:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:12:18.078 21:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:18.078 21:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:18.078 21:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1347455 00:12:18.078 21:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 1347455 ']' 00:12:18.078 21:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 1347455 00:12:18.078 21:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:12:18.078 21:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:18.078 21:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1347455 00:12:18.078 21:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:18.078 21:55:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:18.078 21:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1347455' 00:12:18.078 killing process with pid 1347455 00:12:18.078 21:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 1347455 00:12:18.078 [2024-07-13 21:55:37.415781] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:18.078 [2024-07-13 21:55:37.415864] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.078 [2024-07-13 21:55:37.415921] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.078 [2024-07-13 21:55:37.415936] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:12:18.078 21:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 1347455 00:12:18.337 [2024-07-13 21:55:37.557032] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:19.718 21:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:12:19.718 00:12:19.718 real 0m8.999s 00:12:19.718 user 0m14.752s 00:12:19.718 sys 0m1.658s 00:12:19.718 21:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:19.718 21:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.718 ************************************ 00:12:19.718 END TEST raid_superblock_test 00:12:19.718 ************************************ 00:12:19.718 21:55:38 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:19.718 21:55:38 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:12:19.718 21:55:38 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:19.718 21:55:38 bdev_raid -- common/autotest_common.sh@1105 
-- # xtrace_disable 00:12:19.718 21:55:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:19.718 ************************************ 00:12:19.718 START TEST raid_read_error_test 00:12:19.718 ************************************ 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 read 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@794 -- # local create_arg 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.9MwnI2f31e 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1349231 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1349231 /var/tmp/spdk-raid.sock 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 1349231 ']' 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:19.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:19.718 21:55:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.718 [2024-07-13 21:55:38.921629] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:19.718 [2024-07-13 21:55:38.921724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1349231 ] 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3f:01.7 
cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:19.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.718 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:19.718 [2024-07-13 21:55:39.083283] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.978 [2024-07-13 21:55:39.287407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.237 [2024-07-13 21:55:39.540118] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.237 [2024-07-13 21:55:39.540148] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.497 21:55:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:20.497 21:55:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:12:20.497 21:55:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:20.497 21:55:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:20.497 BaseBdev1_malloc 00:12:20.756 21:55:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:20.756 true 00:12:20.756 21:55:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:21.015 [2024-07-13 21:55:40.220087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:21.015 [2024-07-13 21:55:40.220139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.015 [2024-07-13 21:55:40.220164] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:12:21.015 [2024-07-13 21:55:40.220181] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.015 [2024-07-13 21:55:40.222340] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.015 [2024-07-13 21:55:40.222375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:21.015 BaseBdev1 00:12:21.015 21:55:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:21.015 21:55:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:21.274 BaseBdev2_malloc 00:12:21.274 21:55:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:21.274 true 00:12:21.274 21:55:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:21.533 [2024-07-13 21:55:40.747792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:21.533 [2024-07-13 21:55:40.747845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.533 [2024-07-13 21:55:40.747868] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:12:21.533 [2024-07-13 21:55:40.747884] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.533 [2024-07-13 21:55:40.749987] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.533 [2024-07-13 21:55:40.750021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:21.533 BaseBdev2 00:12:21.533 21:55:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:12:21.533 [2024-07-13 21:55:40.916301] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:21.533 [2024-07-13 21:55:40.918143] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:21.533 [2024-07-13 21:55:40.918336] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040e80 00:12:21.533 [2024-07-13 21:55:40.918353] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:21.533 [2024-07-13 21:55:40.918640] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:12:21.533 [2024-07-13 21:55:40.918834] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040e80 00:12:21.533 [2024-07-13 
21:55:40.918848] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040e80 00:12:21.533 [2024-07-13 21:55:40.919029] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.793 21:55:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:21.793 21:55:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:21.793 21:55:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:21.793 21:55:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:21.793 21:55:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:21.793 21:55:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:21.793 21:55:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:21.793 21:55:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:21.793 21:55:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:21.793 21:55:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:21.793 21:55:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:21.793 21:55:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.793 21:55:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:21.793 "name": "raid_bdev1", 00:12:21.793 "uuid": "59c183f2-131f-45a5-b85f-fc3bbf55f6e4", 00:12:21.793 "strip_size_kb": 64, 00:12:21.793 "state": "online", 00:12:21.793 "raid_level": "raid0", 00:12:21.793 
"superblock": true, 00:12:21.793 "num_base_bdevs": 2, 00:12:21.793 "num_base_bdevs_discovered": 2, 00:12:21.793 "num_base_bdevs_operational": 2, 00:12:21.793 "base_bdevs_list": [ 00:12:21.793 { 00:12:21.793 "name": "BaseBdev1", 00:12:21.793 "uuid": "86190116-231e-5859-9106-0eddf4201f7c", 00:12:21.793 "is_configured": true, 00:12:21.793 "data_offset": 2048, 00:12:21.793 "data_size": 63488 00:12:21.793 }, 00:12:21.793 { 00:12:21.793 "name": "BaseBdev2", 00:12:21.793 "uuid": "353f5cc7-e796-5866-af8e-40cded668f3a", 00:12:21.793 "is_configured": true, 00:12:21.793 "data_offset": 2048, 00:12:21.793 "data_size": 63488 00:12:21.793 } 00:12:21.793 ] 00:12:21.793 }' 00:12:21.793 21:55:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:21.793 21:55:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.361 21:55:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:12:22.361 21:55:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:22.361 [2024-07-13 21:55:41.635323] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:12:23.299 21:55:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:23.558 21:55:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:12:23.559 21:55:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:23.559 21:55:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:12:23.559 21:55:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:23.559 21:55:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:23.559 21:55:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:23.559 21:55:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:23.559 21:55:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:23.559 21:55:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:23.559 21:55:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:23.559 21:55:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:23.559 21:55:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:23.559 21:55:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:23.559 21:55:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:23.559 21:55:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.559 21:55:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:23.559 "name": "raid_bdev1", 00:12:23.559 "uuid": "59c183f2-131f-45a5-b85f-fc3bbf55f6e4", 00:12:23.559 "strip_size_kb": 64, 00:12:23.559 "state": "online", 00:12:23.559 "raid_level": "raid0", 00:12:23.559 "superblock": true, 00:12:23.559 "num_base_bdevs": 2, 00:12:23.559 "num_base_bdevs_discovered": 2, 00:12:23.559 "num_base_bdevs_operational": 2, 00:12:23.559 "base_bdevs_list": [ 00:12:23.559 { 00:12:23.559 "name": "BaseBdev1", 00:12:23.559 "uuid": "86190116-231e-5859-9106-0eddf4201f7c", 00:12:23.559 "is_configured": true, 00:12:23.559 "data_offset": 2048, 00:12:23.559 "data_size": 63488 00:12:23.559 }, 
00:12:23.559 { 00:12:23.559 "name": "BaseBdev2", 00:12:23.559 "uuid": "353f5cc7-e796-5866-af8e-40cded668f3a", 00:12:23.559 "is_configured": true, 00:12:23.559 "data_offset": 2048, 00:12:23.559 "data_size": 63488 00:12:23.559 } 00:12:23.559 ] 00:12:23.559 }' 00:12:23.559 21:55:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:23.559 21:55:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.127 21:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:24.385 [2024-07-13 21:55:43.547563] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:24.385 [2024-07-13 21:55:43.547596] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:24.385 [2024-07-13 21:55:43.549791] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:24.385 [2024-07-13 21:55:43.549830] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.385 [2024-07-13 21:55:43.549859] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:24.385 [2024-07-13 21:55:43.549872] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state offline 00:12:24.385 0 00:12:24.385 21:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1349231 00:12:24.385 21:55:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 1349231 ']' 00:12:24.385 21:55:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 1349231 00:12:24.385 21:55:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:12:24.385 21:55:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:24.385 
21:55:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1349231 00:12:24.385 21:55:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:24.385 21:55:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:24.385 21:55:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1349231' 00:12:24.385 killing process with pid 1349231 00:12:24.386 21:55:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 1349231 00:12:24.386 21:55:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 1349231 00:12:24.386 [2024-07-13 21:55:43.613946] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:24.386 [2024-07-13 21:55:43.683414] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:25.763 21:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.9MwnI2f31e 00:12:25.763 21:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:12:25.763 21:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:12:25.763 21:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:12:25.763 21:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:12:25.763 21:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:25.763 21:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:25.763 21:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:12:25.763 00:12:25.763 real 0m6.125s 00:12:25.763 user 0m8.455s 00:12:25.763 sys 0m0.966s 00:12:25.763 21:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:25.763 21:55:44 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.763 ************************************ 00:12:25.763 END TEST raid_read_error_test 00:12:25.763 ************************************ 00:12:25.763 21:55:44 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:25.763 21:55:44 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:12:25.763 21:55:44 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:25.763 21:55:44 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:25.763 21:55:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:25.763 ************************************ 00:12:25.763 START TEST raid_write_error_test 00:12:25.763 ************************************ 00:12:25.763 21:55:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 write 00:12:25.763 21:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:12:25.763 21:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:12:25.763 21:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:12:25.763 21:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:25.763 21:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:25.763 21:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:25.763 21:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:25.763 21:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:25.763 21:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:12:25.763 21:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:25.763 21:55:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:25.763 21:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:25.764 21:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:12:25.764 21:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:12:25.764 21:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:12:25.764 21:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:12:25.764 21:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:12:25.764 21:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:12:25.764 21:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:12:25.764 21:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:12:25.764 21:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:12:25.764 21:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:12:25.764 21:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.s6twg8cBS7 00:12:25.764 21:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1350388 00:12:25.764 21:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1350388 /var/tmp/spdk-raid.sock 00:12:25.764 21:55:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 1350388 ']' 00:12:25.764 21:55:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:25.764 21:55:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:25.764 21:55:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:25.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:25.764 21:55:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:25.764 21:55:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.764 21:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:25.764 [2024-07-13 21:55:45.121396] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:25.764 [2024-07-13 21:55:45.121490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1350388 ] 00:12:26.023 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.023 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:26.023 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.023 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:26.023 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.023 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:26.023 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.023 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:26.023 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.023 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:26.023 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.023 EAL: Requested device 0000:3d:01.5 cannot be used 
00:12:26.023 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.023 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:26.023 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.023 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:26.023 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.023 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:26.023 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.023 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:26.023 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.023 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:26.023 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.023 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:26.023 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.023 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:26.023 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.023 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:26.023 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.023 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:26.023 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.023 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:26.023 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.023 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:26.023 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.023 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:26.023 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.023 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:26.023 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.023 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:26.023 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.023 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:26.023 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.023 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:26.023 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.023 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:26.023 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.023 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:26.023 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.023 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:26.023 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.023 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:26.023 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.023 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:26.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.024 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:26.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.024 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:26.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.024 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:26.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.024 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:26.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.024 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:26.024 [2024-07-13 21:55:45.282435] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.283 [2024-07-13 21:55:45.485881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.542 [2024-07-13 21:55:45.717176] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: 
raid_bdev_get_ctx_size 00:12:26.542 [2024-07-13 21:55:45.717204] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:26.542 21:55:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:26.542 21:55:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:12:26.542 21:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:26.542 21:55:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:26.801 BaseBdev1_malloc 00:12:26.801 21:55:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:27.060 true 00:12:27.060 21:55:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:27.060 [2024-07-13 21:55:46.375052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:27.060 [2024-07-13 21:55:46.375107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.060 [2024-07-13 21:55:46.375129] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:12:27.060 [2024-07-13 21:55:46.375145] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.060 [2024-07-13 21:55:46.377248] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.060 [2024-07-13 21:55:46.377282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:27.061 BaseBdev1 00:12:27.061 21:55:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:27.061 21:55:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:27.320 BaseBdev2_malloc 00:12:27.320 21:55:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:27.579 true 00:12:27.579 21:55:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:27.579 [2024-07-13 21:55:46.914967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:27.579 [2024-07-13 21:55:46.915020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.579 [2024-07-13 21:55:46.915041] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:12:27.579 [2024-07-13 21:55:46.915056] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.579 [2024-07-13 21:55:46.917128] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.579 [2024-07-13 21:55:46.917160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:27.579 BaseBdev2 00:12:27.579 21:55:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:12:27.847 [2024-07-13 21:55:47.091511] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:27.847 [2024-07-13 21:55:47.093319] bdev_raid.c:3198:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:12:27.847 [2024-07-13 21:55:47.093504] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040e80 00:12:27.847 [2024-07-13 21:55:47.093521] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:27.847 [2024-07-13 21:55:47.093783] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:12:27.847 [2024-07-13 21:55:47.093986] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040e80 00:12:27.847 [2024-07-13 21:55:47.094000] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040e80 00:12:27.847 [2024-07-13 21:55:47.094175] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.847 21:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:27.847 21:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:27.847 21:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:27.847 21:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:27.847 21:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:27.847 21:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:27.847 21:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:27.847 21:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:27.847 21:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:27.847 21:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:27.847 21:55:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:27.847 21:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.136 21:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:28.136 "name": "raid_bdev1", 00:12:28.136 "uuid": "a788696a-c8e5-4546-b29b-1d78404a67f2", 00:12:28.136 "strip_size_kb": 64, 00:12:28.136 "state": "online", 00:12:28.136 "raid_level": "raid0", 00:12:28.136 "superblock": true, 00:12:28.136 "num_base_bdevs": 2, 00:12:28.136 "num_base_bdevs_discovered": 2, 00:12:28.136 "num_base_bdevs_operational": 2, 00:12:28.136 "base_bdevs_list": [ 00:12:28.136 { 00:12:28.136 "name": "BaseBdev1", 00:12:28.136 "uuid": "2b773b11-66d3-5f55-bc0f-975a298df2d2", 00:12:28.136 "is_configured": true, 00:12:28.136 "data_offset": 2048, 00:12:28.136 "data_size": 63488 00:12:28.136 }, 00:12:28.136 { 00:12:28.136 "name": "BaseBdev2", 00:12:28.136 "uuid": "a5720392-6b05-5bc5-99bc-b50425e6f16c", 00:12:28.136 "is_configured": true, 00:12:28.136 "data_offset": 2048, 00:12:28.136 "data_size": 63488 00:12:28.136 } 00:12:28.136 ] 00:12:28.136 }' 00:12:28.136 21:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:28.136 21:55:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.394 21:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:12:28.394 21:55:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:28.652 [2024-07-13 21:55:47.834797] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:12:29.589 21:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:29.589 21:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:12:29.589 21:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:29.589 21:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:12:29.589 21:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:29.589 21:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:29.589 21:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:29.589 21:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:29.589 21:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:29.589 21:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:29.589 21:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:29.589 21:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:29.589 21:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:29.589 21:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:29.589 21:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:29.589 21:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.848 21:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 
-- # raid_bdev_info='{ 00:12:29.848 "name": "raid_bdev1", 00:12:29.848 "uuid": "a788696a-c8e5-4546-b29b-1d78404a67f2", 00:12:29.848 "strip_size_kb": 64, 00:12:29.848 "state": "online", 00:12:29.848 "raid_level": "raid0", 00:12:29.848 "superblock": true, 00:12:29.848 "num_base_bdevs": 2, 00:12:29.848 "num_base_bdevs_discovered": 2, 00:12:29.848 "num_base_bdevs_operational": 2, 00:12:29.848 "base_bdevs_list": [ 00:12:29.848 { 00:12:29.848 "name": "BaseBdev1", 00:12:29.848 "uuid": "2b773b11-66d3-5f55-bc0f-975a298df2d2", 00:12:29.848 "is_configured": true, 00:12:29.848 "data_offset": 2048, 00:12:29.848 "data_size": 63488 00:12:29.848 }, 00:12:29.848 { 00:12:29.848 "name": "BaseBdev2", 00:12:29.848 "uuid": "a5720392-6b05-5bc5-99bc-b50425e6f16c", 00:12:29.848 "is_configured": true, 00:12:29.848 "data_offset": 2048, 00:12:29.848 "data_size": 63488 00:12:29.848 } 00:12:29.848 ] 00:12:29.848 }' 00:12:29.848 21:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:29.848 21:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.416 21:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:30.416 [2024-07-13 21:55:49.750782] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:30.416 [2024-07-13 21:55:49.750825] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:30.416 [2024-07-13 21:55:49.753016] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:30.416 [2024-07-13 21:55:49.753057] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.416 [2024-07-13 21:55:49.753090] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:30.416 [2024-07-13 21:55:49.753108] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state offline 00:12:30.416 0 00:12:30.416 21:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1350388 00:12:30.416 21:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 1350388 ']' 00:12:30.416 21:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 1350388 00:12:30.416 21:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:12:30.416 21:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:30.416 21:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1350388 00:12:30.675 21:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:30.675 21:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:30.675 21:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1350388' 00:12:30.675 killing process with pid 1350388 00:12:30.675 21:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 1350388 00:12:30.675 [2024-07-13 21:55:49.823243] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:30.675 21:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 1350388 00:12:30.675 [2024-07-13 21:55:49.893602] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:32.054 21:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.s6twg8cBS7 00:12:32.054 21:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:12:32.054 21:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:12:32.054 21:55:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:12:32.054 21:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:12:32.054 21:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:32.054 21:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:32.054 21:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:12:32.054 00:12:32.054 real 0m6.160s 00:12:32.054 user 0m8.521s 00:12:32.054 sys 0m0.987s 00:12:32.054 21:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:32.054 21:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.054 ************************************ 00:12:32.054 END TEST raid_write_error_test 00:12:32.054 ************************************ 00:12:32.054 21:55:51 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:32.054 21:55:51 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:12:32.054 21:55:51 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:12:32.054 21:55:51 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:32.054 21:55:51 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:32.054 21:55:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:32.054 ************************************ 00:12:32.055 START TEST raid_state_function_test 00:12:32.055 ************************************ 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 false 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:12:32.055 21:55:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1351540 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1351540' 00:12:32.055 Process raid pid: 1351540 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1351540 /var/tmp/spdk-raid.sock 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 1351540 ']' 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:32.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:32.055 21:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.055 [2024-07-13 21:55:51.361864] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:12:32.055 [2024-07-13 21:55:51.361961] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.315 EAL: Requested device 0000:3d:01.0 cannot be used
00:12:32.315 [2024-07-13 21:55:51.524266] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.575 [2024-07-13 21:55:51.745388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.834 [2024-07-13 21:55:52.002326] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.834 [2024-07-13 21:55:52.002354] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.834 21:55:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:32.834 21:55:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:12:32.834 21:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:33.093 [2024-07-13 21:55:52.294649] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:33.093 [2024-07-13 21:55:52.294692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
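The trace above shows the state transition this test exercises: `bdev_raid_create` is issued before either base bdev exists, so `Existed_Raid` sits in `configuring` and only goes `online` once every base bdev has been discovered and claimed. A simplified standalone model of that rule (the function name `raid_state` is hypothetical and this is not an SPDK call, just a sketch of the logic the log demonstrates):

```shell
# Sketch: a raid bdev stays "configuring" until the number of discovered
# base bdevs reaches the operational count, then becomes "online".
raid_state() {
    local discovered=$1 operational=$2
    if [ "$discovered" -lt "$operational" ]; then
        echo configuring
    else
        echo online
    fi
}
raid_state 0 2   # → configuring (neither BaseBdev1 nor BaseBdev2 exists yet)
raid_state 1 2   # → configuring (only BaseBdev1 created and claimed)
raid_state 2 2   # → online (both base bdevs claimed)
```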
00:12:33.093 [2024-07-13 21:55:52.294702] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:33.093 [2024-07-13 21:55:52.294713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:33.093 21:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:33.093 21:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:33.093 21:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:33.093 21:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:33.093 21:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:33.093 21:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:33.093 21:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:33.093 21:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:33.093 21:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:33.093 21:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:33.093 21:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:33.093 21:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.351 21:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:33.351 "name": "Existed_Raid", 00:12:33.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.351 "strip_size_kb": 64, 
00:12:33.351 "state": "configuring", 00:12:33.351 "raid_level": "concat", 00:12:33.351 "superblock": false, 00:12:33.351 "num_base_bdevs": 2, 00:12:33.351 "num_base_bdevs_discovered": 0, 00:12:33.351 "num_base_bdevs_operational": 2, 00:12:33.351 "base_bdevs_list": [ 00:12:33.351 { 00:12:33.351 "name": "BaseBdev1", 00:12:33.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.351 "is_configured": false, 00:12:33.351 "data_offset": 0, 00:12:33.351 "data_size": 0 00:12:33.351 }, 00:12:33.351 { 00:12:33.351 "name": "BaseBdev2", 00:12:33.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.351 "is_configured": false, 00:12:33.351 "data_offset": 0, 00:12:33.351 "data_size": 0 00:12:33.351 } 00:12:33.351 ] 00:12:33.351 }' 00:12:33.351 21:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:33.351 21:55:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.610 21:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:33.868 [2024-07-13 21:55:53.136908] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:33.868 [2024-07-13 21:55:53.136956] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:12:33.868 21:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:34.126 [2024-07-13 21:55:53.313406] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:34.126 [2024-07-13 21:55:53.313445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:34.126 [2024-07-13 21:55:53.313455] 
bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:34.126 [2024-07-13 21:55:53.313467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:34.126 21:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:34.383 [2024-07-13 21:55:53.535495] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:34.383 BaseBdev1 00:12:34.383 21:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:34.383 21:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:34.383 21:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:34.383 21:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:34.383 21:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:34.383 21:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:34.383 21:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:34.383 21:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:34.642 [ 00:12:34.642 { 00:12:34.642 "name": "BaseBdev1", 00:12:34.642 "aliases": [ 00:12:34.642 "41c49ea0-7f06-4a10-9e60-eec396956ccc" 00:12:34.642 ], 00:12:34.642 "product_name": "Malloc disk", 00:12:34.642 "block_size": 512, 00:12:34.642 "num_blocks": 65536, 00:12:34.642 "uuid": 
"41c49ea0-7f06-4a10-9e60-eec396956ccc", 00:12:34.642 "assigned_rate_limits": { 00:12:34.642 "rw_ios_per_sec": 0, 00:12:34.642 "rw_mbytes_per_sec": 0, 00:12:34.642 "r_mbytes_per_sec": 0, 00:12:34.642 "w_mbytes_per_sec": 0 00:12:34.642 }, 00:12:34.642 "claimed": true, 00:12:34.642 "claim_type": "exclusive_write", 00:12:34.642 "zoned": false, 00:12:34.642 "supported_io_types": { 00:12:34.642 "read": true, 00:12:34.642 "write": true, 00:12:34.642 "unmap": true, 00:12:34.642 "flush": true, 00:12:34.642 "reset": true, 00:12:34.642 "nvme_admin": false, 00:12:34.642 "nvme_io": false, 00:12:34.642 "nvme_io_md": false, 00:12:34.642 "write_zeroes": true, 00:12:34.642 "zcopy": true, 00:12:34.642 "get_zone_info": false, 00:12:34.642 "zone_management": false, 00:12:34.642 "zone_append": false, 00:12:34.642 "compare": false, 00:12:34.642 "compare_and_write": false, 00:12:34.642 "abort": true, 00:12:34.642 "seek_hole": false, 00:12:34.642 "seek_data": false, 00:12:34.642 "copy": true, 00:12:34.642 "nvme_iov_md": false 00:12:34.642 }, 00:12:34.642 "memory_domains": [ 00:12:34.642 { 00:12:34.642 "dma_device_id": "system", 00:12:34.642 "dma_device_type": 1 00:12:34.642 }, 00:12:34.642 { 00:12:34.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.642 "dma_device_type": 2 00:12:34.642 } 00:12:34.642 ], 00:12:34.642 "driver_specific": {} 00:12:34.642 } 00:12:34.642 ] 00:12:34.642 21:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:34.642 21:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:34.642 21:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:34.642 21:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:34.642 21:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:34.642 21:55:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:34.642 21:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:34.642 21:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:34.642 21:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:34.642 21:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:34.642 21:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:34.642 21:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:34.642 21:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.900 21:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:34.900 "name": "Existed_Raid", 00:12:34.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.900 "strip_size_kb": 64, 00:12:34.900 "state": "configuring", 00:12:34.900 "raid_level": "concat", 00:12:34.900 "superblock": false, 00:12:34.900 "num_base_bdevs": 2, 00:12:34.900 "num_base_bdevs_discovered": 1, 00:12:34.900 "num_base_bdevs_operational": 2, 00:12:34.900 "base_bdevs_list": [ 00:12:34.900 { 00:12:34.900 "name": "BaseBdev1", 00:12:34.900 "uuid": "41c49ea0-7f06-4a10-9e60-eec396956ccc", 00:12:34.900 "is_configured": true, 00:12:34.900 "data_offset": 0, 00:12:34.900 "data_size": 65536 00:12:34.900 }, 00:12:34.900 { 00:12:34.900 "name": "BaseBdev2", 00:12:34.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.900 "is_configured": false, 00:12:34.900 "data_offset": 0, 00:12:34.900 "data_size": 0 00:12:34.900 } 00:12:34.900 ] 00:12:34.900 }' 00:12:34.900 21:55:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:34.900 21:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.467 21:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:35.467 [2024-07-13 21:55:54.714614] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:35.467 [2024-07-13 21:55:54.714659] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:12:35.467 21:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:35.726 [2024-07-13 21:55:54.891146] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:35.726 [2024-07-13 21:55:54.892879] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:35.726 [2024-07-13 21:55:54.892924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:35.726 21:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:35.726 21:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:35.726 21:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:35.726 21:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:35.726 21:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:35.726 21:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=concat 00:12:35.726 21:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:35.726 21:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:35.726 21:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:35.726 21:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:35.726 21:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:35.726 21:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:35.726 21:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:35.726 21:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.726 21:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:35.726 "name": "Existed_Raid", 00:12:35.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.726 "strip_size_kb": 64, 00:12:35.726 "state": "configuring", 00:12:35.726 "raid_level": "concat", 00:12:35.726 "superblock": false, 00:12:35.726 "num_base_bdevs": 2, 00:12:35.726 "num_base_bdevs_discovered": 1, 00:12:35.726 "num_base_bdevs_operational": 2, 00:12:35.726 "base_bdevs_list": [ 00:12:35.726 { 00:12:35.726 "name": "BaseBdev1", 00:12:35.726 "uuid": "41c49ea0-7f06-4a10-9e60-eec396956ccc", 00:12:35.726 "is_configured": true, 00:12:35.726 "data_offset": 0, 00:12:35.726 "data_size": 65536 00:12:35.726 }, 00:12:35.726 { 00:12:35.726 "name": "BaseBdev2", 00:12:35.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.726 "is_configured": false, 00:12:35.726 "data_offset": 0, 00:12:35.726 "data_size": 0 00:12:35.726 } 00:12:35.726 ] 00:12:35.726 }' 
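The `raid_bdev_info` JSON above is what `verify_raid_bdev_state` inspects after filtering `bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'`. A dependency-free sketch of that per-field check, using inlined sample data and a crude grep/cut extractor (the helper name `json_field` is hypothetical; a live run would read the RPC socket and use jq as the log does):

```shell
# Sample of the RPC output, trimmed to the fields the check reads.
raid_bdev_info='{ "name": "Existed_Raid", "state": "configuring",
 "raid_level": "concat", "num_base_bdevs_discovered": 1 }'

# Crude JSON field extractor: split on commas, grep the key, strip quotes.
json_field() {
    echo "$raid_bdev_info" | tr ',' '\n' | grep "\"$1\"" \
        | cut -d: -f2 | tr -d '"{} '
}

[ "$(json_field state)" = "configuring" ] && echo "state ok"
[ "$(json_field num_base_bdevs_discovered)" = "1" ] && echo "discovered ok"
```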
00:12:35.726 21:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:35.726 21:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.293 21:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:36.553 [2024-07-13 21:55:55.764651] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:36.553 [2024-07-13 21:55:55.764691] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:12:36.553 [2024-07-13 21:55:55.764702] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:36.553 [2024-07-13 21:55:55.764962] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:12:36.553 [2024-07-13 21:55:55.765152] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:12:36.553 [2024-07-13 21:55:55.765166] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:12:36.553 [2024-07-13 21:55:55.765422] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.553 BaseBdev2 00:12:36.553 21:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:36.553 21:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:36.553 21:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:36.553 21:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:36.553 21:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:36.553 21:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:12:36.553 21:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:36.813 21:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:36.813 [ 00:12:36.813 { 00:12:36.813 "name": "BaseBdev2", 00:12:36.813 "aliases": [ 00:12:36.813 "03e26ffb-02bc-4ef9-ac33-f122250bf678" 00:12:36.813 ], 00:12:36.813 "product_name": "Malloc disk", 00:12:36.813 "block_size": 512, 00:12:36.813 "num_blocks": 65536, 00:12:36.813 "uuid": "03e26ffb-02bc-4ef9-ac33-f122250bf678", 00:12:36.813 "assigned_rate_limits": { 00:12:36.813 "rw_ios_per_sec": 0, 00:12:36.813 "rw_mbytes_per_sec": 0, 00:12:36.813 "r_mbytes_per_sec": 0, 00:12:36.813 "w_mbytes_per_sec": 0 00:12:36.813 }, 00:12:36.813 "claimed": true, 00:12:36.813 "claim_type": "exclusive_write", 00:12:36.813 "zoned": false, 00:12:36.813 "supported_io_types": { 00:12:36.813 "read": true, 00:12:36.813 "write": true, 00:12:36.813 "unmap": true, 00:12:36.813 "flush": true, 00:12:36.813 "reset": true, 00:12:36.813 "nvme_admin": false, 00:12:36.813 "nvme_io": false, 00:12:36.813 "nvme_io_md": false, 00:12:36.813 "write_zeroes": true, 00:12:36.813 "zcopy": true, 00:12:36.813 "get_zone_info": false, 00:12:36.813 "zone_management": false, 00:12:36.813 "zone_append": false, 00:12:36.813 "compare": false, 00:12:36.813 "compare_and_write": false, 00:12:36.813 "abort": true, 00:12:36.813 "seek_hole": false, 00:12:36.813 "seek_data": false, 00:12:36.813 "copy": true, 00:12:36.813 "nvme_iov_md": false 00:12:36.813 }, 00:12:36.813 "memory_domains": [ 00:12:36.813 { 00:12:36.813 "dma_device_id": "system", 00:12:36.813 "dma_device_type": 1 00:12:36.813 }, 00:12:36.813 { 00:12:36.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.813 
"dma_device_type": 2 00:12:36.813 } 00:12:36.813 ], 00:12:36.813 "driver_specific": {} 00:12:36.813 } 00:12:36.813 ] 00:12:36.813 21:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:36.813 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:36.813 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:36.813 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:12:36.813 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:36.813 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:36.813 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:36.813 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:36.813 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:36.813 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:36.813 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:36.813 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:36.813 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:36.813 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.813 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:37.073 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- 
# raid_bdev_info='{ 00:12:37.073 "name": "Existed_Raid", 00:12:37.073 "uuid": "1b3b46b5-563e-4cd6-9599-9dfafea91f9f", 00:12:37.073 "strip_size_kb": 64, 00:12:37.073 "state": "online", 00:12:37.073 "raid_level": "concat", 00:12:37.073 "superblock": false, 00:12:37.073 "num_base_bdevs": 2, 00:12:37.073 "num_base_bdevs_discovered": 2, 00:12:37.073 "num_base_bdevs_operational": 2, 00:12:37.073 "base_bdevs_list": [ 00:12:37.073 { 00:12:37.073 "name": "BaseBdev1", 00:12:37.073 "uuid": "41c49ea0-7f06-4a10-9e60-eec396956ccc", 00:12:37.073 "is_configured": true, 00:12:37.073 "data_offset": 0, 00:12:37.073 "data_size": 65536 00:12:37.073 }, 00:12:37.073 { 00:12:37.073 "name": "BaseBdev2", 00:12:37.073 "uuid": "03e26ffb-02bc-4ef9-ac33-f122250bf678", 00:12:37.073 "is_configured": true, 00:12:37.073 "data_offset": 0, 00:12:37.073 "data_size": 65536 00:12:37.073 } 00:12:37.073 ] 00:12:37.073 }' 00:12:37.073 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:37.073 21:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.641 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:37.641 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:37.641 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:37.641 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:37.641 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:37.641 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:37.641 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:37.641 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:37.641 [2024-07-13 21:55:56.952237] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:37.641 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:37.641 "name": "Existed_Raid", 00:12:37.641 "aliases": [ 00:12:37.641 "1b3b46b5-563e-4cd6-9599-9dfafea91f9f" 00:12:37.641 ], 00:12:37.641 "product_name": "Raid Volume", 00:12:37.641 "block_size": 512, 00:12:37.641 "num_blocks": 131072, 00:12:37.641 "uuid": "1b3b46b5-563e-4cd6-9599-9dfafea91f9f", 00:12:37.641 "assigned_rate_limits": { 00:12:37.641 "rw_ios_per_sec": 0, 00:12:37.641 "rw_mbytes_per_sec": 0, 00:12:37.641 "r_mbytes_per_sec": 0, 00:12:37.641 "w_mbytes_per_sec": 0 00:12:37.641 }, 00:12:37.641 "claimed": false, 00:12:37.641 "zoned": false, 00:12:37.641 "supported_io_types": { 00:12:37.641 "read": true, 00:12:37.641 "write": true, 00:12:37.641 "unmap": true, 00:12:37.641 "flush": true, 00:12:37.641 "reset": true, 00:12:37.641 "nvme_admin": false, 00:12:37.641 "nvme_io": false, 00:12:37.641 "nvme_io_md": false, 00:12:37.641 "write_zeroes": true, 00:12:37.641 "zcopy": false, 00:12:37.641 "get_zone_info": false, 00:12:37.641 "zone_management": false, 00:12:37.641 "zone_append": false, 00:12:37.641 "compare": false, 00:12:37.641 "compare_and_write": false, 00:12:37.641 "abort": false, 00:12:37.641 "seek_hole": false, 00:12:37.641 "seek_data": false, 00:12:37.641 "copy": false, 00:12:37.641 "nvme_iov_md": false 00:12:37.641 }, 00:12:37.641 "memory_domains": [ 00:12:37.641 { 00:12:37.641 "dma_device_id": "system", 00:12:37.641 "dma_device_type": 1 00:12:37.641 }, 00:12:37.641 { 00:12:37.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.641 "dma_device_type": 2 00:12:37.641 }, 00:12:37.641 { 00:12:37.641 "dma_device_id": "system", 00:12:37.641 "dma_device_type": 1 00:12:37.641 }, 00:12:37.641 { 00:12:37.641 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.641 "dma_device_type": 2 00:12:37.641 } 00:12:37.641 ], 00:12:37.641 "driver_specific": { 00:12:37.641 "raid": { 00:12:37.641 "uuid": "1b3b46b5-563e-4cd6-9599-9dfafea91f9f", 00:12:37.641 "strip_size_kb": 64, 00:12:37.641 "state": "online", 00:12:37.641 "raid_level": "concat", 00:12:37.641 "superblock": false, 00:12:37.641 "num_base_bdevs": 2, 00:12:37.641 "num_base_bdevs_discovered": 2, 00:12:37.641 "num_base_bdevs_operational": 2, 00:12:37.641 "base_bdevs_list": [ 00:12:37.641 { 00:12:37.641 "name": "BaseBdev1", 00:12:37.641 "uuid": "41c49ea0-7f06-4a10-9e60-eec396956ccc", 00:12:37.641 "is_configured": true, 00:12:37.641 "data_offset": 0, 00:12:37.641 "data_size": 65536 00:12:37.641 }, 00:12:37.641 { 00:12:37.641 "name": "BaseBdev2", 00:12:37.641 "uuid": "03e26ffb-02bc-4ef9-ac33-f122250bf678", 00:12:37.641 "is_configured": true, 00:12:37.641 "data_offset": 0, 00:12:37.641 "data_size": 65536 00:12:37.641 } 00:12:37.641 ] 00:12:37.641 } 00:12:37.641 } 00:12:37.641 }' 00:12:37.641 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:37.641 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:37.641 BaseBdev2' 00:12:37.641 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:37.641 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:37.641 21:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:37.900 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:37.900 "name": "BaseBdev1", 00:12:37.900 "aliases": [ 00:12:37.900 "41c49ea0-7f06-4a10-9e60-eec396956ccc" 00:12:37.900 ], 
00:12:37.900 "product_name": "Malloc disk", 00:12:37.900 "block_size": 512, 00:12:37.900 "num_blocks": 65536, 00:12:37.900 "uuid": "41c49ea0-7f06-4a10-9e60-eec396956ccc", 00:12:37.900 "assigned_rate_limits": { 00:12:37.900 "rw_ios_per_sec": 0, 00:12:37.900 "rw_mbytes_per_sec": 0, 00:12:37.900 "r_mbytes_per_sec": 0, 00:12:37.900 "w_mbytes_per_sec": 0 00:12:37.900 }, 00:12:37.900 "claimed": true, 00:12:37.900 "claim_type": "exclusive_write", 00:12:37.900 "zoned": false, 00:12:37.900 "supported_io_types": { 00:12:37.900 "read": true, 00:12:37.900 "write": true, 00:12:37.900 "unmap": true, 00:12:37.900 "flush": true, 00:12:37.900 "reset": true, 00:12:37.900 "nvme_admin": false, 00:12:37.900 "nvme_io": false, 00:12:37.900 "nvme_io_md": false, 00:12:37.900 "write_zeroes": true, 00:12:37.900 "zcopy": true, 00:12:37.900 "get_zone_info": false, 00:12:37.900 "zone_management": false, 00:12:37.900 "zone_append": false, 00:12:37.900 "compare": false, 00:12:37.900 "compare_and_write": false, 00:12:37.900 "abort": true, 00:12:37.900 "seek_hole": false, 00:12:37.900 "seek_data": false, 00:12:37.900 "copy": true, 00:12:37.900 "nvme_iov_md": false 00:12:37.900 }, 00:12:37.900 "memory_domains": [ 00:12:37.900 { 00:12:37.900 "dma_device_id": "system", 00:12:37.900 "dma_device_type": 1 00:12:37.900 }, 00:12:37.900 { 00:12:37.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.900 "dma_device_type": 2 00:12:37.900 } 00:12:37.900 ], 00:12:37.900 "driver_specific": {} 00:12:37.900 }' 00:12:37.900 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:37.900 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:37.900 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:37.900 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:37.900 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:12:38.159 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:38.159 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:38.159 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:38.159 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:38.159 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:38.159 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:38.159 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:38.159 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:38.159 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:38.159 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:38.418 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:38.418 "name": "BaseBdev2", 00:12:38.418 "aliases": [ 00:12:38.418 "03e26ffb-02bc-4ef9-ac33-f122250bf678" 00:12:38.418 ], 00:12:38.418 "product_name": "Malloc disk", 00:12:38.418 "block_size": 512, 00:12:38.418 "num_blocks": 65536, 00:12:38.418 "uuid": "03e26ffb-02bc-4ef9-ac33-f122250bf678", 00:12:38.418 "assigned_rate_limits": { 00:12:38.418 "rw_ios_per_sec": 0, 00:12:38.418 "rw_mbytes_per_sec": 0, 00:12:38.418 "r_mbytes_per_sec": 0, 00:12:38.418 "w_mbytes_per_sec": 0 00:12:38.418 }, 00:12:38.418 "claimed": true, 00:12:38.418 "claim_type": "exclusive_write", 00:12:38.418 "zoned": false, 00:12:38.418 "supported_io_types": { 00:12:38.418 "read": true, 00:12:38.418 "write": true, 00:12:38.418 "unmap": true, 00:12:38.418 "flush": true, 
00:12:38.418 "reset": true, 00:12:38.418 "nvme_admin": false, 00:12:38.418 "nvme_io": false, 00:12:38.418 "nvme_io_md": false, 00:12:38.418 "write_zeroes": true, 00:12:38.418 "zcopy": true, 00:12:38.418 "get_zone_info": false, 00:12:38.418 "zone_management": false, 00:12:38.418 "zone_append": false, 00:12:38.418 "compare": false, 00:12:38.418 "compare_and_write": false, 00:12:38.418 "abort": true, 00:12:38.418 "seek_hole": false, 00:12:38.418 "seek_data": false, 00:12:38.418 "copy": true, 00:12:38.418 "nvme_iov_md": false 00:12:38.418 }, 00:12:38.418 "memory_domains": [ 00:12:38.418 { 00:12:38.418 "dma_device_id": "system", 00:12:38.418 "dma_device_type": 1 00:12:38.418 }, 00:12:38.418 { 00:12:38.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.419 "dma_device_type": 2 00:12:38.419 } 00:12:38.419 ], 00:12:38.419 "driver_specific": {} 00:12:38.419 }' 00:12:38.419 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:38.419 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:38.419 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:38.419 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:38.419 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:38.678 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:38.678 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:38.678 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:38.678 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:38.678 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:38.678 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:12:38.678 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:38.678 21:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:38.937 [2024-07-13 21:55:58.127120] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:38.937 [2024-07-13 21:55:58.127149] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:38.937 [2024-07-13 21:55:58.127195] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.937 21:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:38.937 21:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:12:38.937 21:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:38.937 21:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:38.937 21:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:12:38.937 21:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:12:38.937 21:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:38.937 21:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:12:38.937 21:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:38.937 21:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:38.937 21:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:12:38.937 21:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:12:38.937 21:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:38.937 21:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:38.937 21:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:38.937 21:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:38.937 21:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.197 21:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:39.197 "name": "Existed_Raid", 00:12:39.197 "uuid": "1b3b46b5-563e-4cd6-9599-9dfafea91f9f", 00:12:39.197 "strip_size_kb": 64, 00:12:39.197 "state": "offline", 00:12:39.197 "raid_level": "concat", 00:12:39.197 "superblock": false, 00:12:39.197 "num_base_bdevs": 2, 00:12:39.197 "num_base_bdevs_discovered": 1, 00:12:39.197 "num_base_bdevs_operational": 1, 00:12:39.197 "base_bdevs_list": [ 00:12:39.197 { 00:12:39.197 "name": null, 00:12:39.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.197 "is_configured": false, 00:12:39.197 "data_offset": 0, 00:12:39.197 "data_size": 65536 00:12:39.197 }, 00:12:39.197 { 00:12:39.197 "name": "BaseBdev2", 00:12:39.197 "uuid": "03e26ffb-02bc-4ef9-ac33-f122250bf678", 00:12:39.197 "is_configured": true, 00:12:39.197 "data_offset": 0, 00:12:39.197 "data_size": 65536 00:12:39.197 } 00:12:39.197 ] 00:12:39.197 }' 00:12:39.197 21:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:39.197 21:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.457 21:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:39.457 21:55:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:39.457 21:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:39.457 21:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:39.716 21:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:39.716 21:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:39.716 21:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:39.975 [2024-07-13 21:55:59.159667] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:39.975 [2024-07-13 21:55:59.159714] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:12:39.975 21:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:39.975 21:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:39.975 21:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:39.975 21:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:40.239 21:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:40.239 21:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:40.239 21:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:12:40.239 21:55:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@341 -- # killprocess 1351540 00:12:40.239 21:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 1351540 ']' 00:12:40.239 21:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 1351540 00:12:40.239 21:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:12:40.239 21:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:40.239 21:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1351540 00:12:40.239 21:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:40.239 21:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:40.239 21:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1351540' 00:12:40.239 killing process with pid 1351540 00:12:40.239 21:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 1351540 00:12:40.239 [2024-07-13 21:55:59.498257] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:40.239 21:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 1351540 00:12:40.239 [2024-07-13 21:55:59.515458] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:41.618 21:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:12:41.618 00:12:41.618 real 0m9.471s 00:12:41.618 user 0m15.478s 00:12:41.618 sys 0m1.833s 00:12:41.618 21:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:41.618 21:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.618 ************************************ 00:12:41.618 END TEST raid_state_function_test 00:12:41.618 
************************************ 00:12:41.618 21:56:00 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:41.618 21:56:00 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:12:41.618 21:56:00 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:41.618 21:56:00 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:41.618 21:56:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:41.618 ************************************ 00:12:41.618 START TEST raid_state_function_test_sb 00:12:41.618 ************************************ 00:12:41.618 21:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 true 00:12:41.618 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:12:41.618 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:12:41.618 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:12:41.618 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:41.618 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:41.618 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:41.618 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:41.618 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:41.618 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:41.619 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:41.619 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:41.619 
21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:41.619 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:41.619 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:41.619 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:41.619 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:41.619 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:41.619 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:41.619 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:12:41.619 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:41.619 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:41.619 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:12:41.619 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:12:41.619 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1353361 00:12:41.619 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1353361' 00:12:41.619 Process raid pid: 1353361 00:12:41.619 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:41.619 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1353361 /var/tmp/spdk-raid.sock 00:12:41.619 
21:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1353361 ']' 00:12:41.619 21:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:41.619 21:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:41.619 21:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:41.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:41.619 21:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:41.619 21:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.619 [2024-07-13 21:56:00.916184] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:41.619 [2024-07-13 21:56:00.916277] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.878 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:41.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.878 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:41.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.878 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:41.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.878 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:41.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.878 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:41.878 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.878 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:41.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.878 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:41.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.878 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:41.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.878 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:41.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.878 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:41.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.878 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:41.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.878 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:41.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.878 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:41.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.878 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:41.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.878 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:41.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.878 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:41.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.878 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:41.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.878 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:41.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.879 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:41.879 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.879 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:41.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.879 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:41.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.879 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:41.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.879 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:41.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.879 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:41.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.879 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:41.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.879 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:41.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.879 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:41.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.879 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:41.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.879 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:41.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.879 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:41.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.879 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:41.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.879 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:41.879 [2024-07-13 21:56:01.079096] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.138 [2024-07-13 21:56:01.286506] reactor.c: 941:reactor_run: 
*NOTICE*: Reactor started on core 0 00:12:42.396 [2024-07-13 21:56:01.533226] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.396 [2024-07-13 21:56:01.533252] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.396 21:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:42.396 21:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:12:42.396 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:42.655 [2024-07-13 21:56:01.818876] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:42.655 [2024-07-13 21:56:01.818928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:42.655 [2024-07-13 21:56:01.818940] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:42.655 [2024-07-13 21:56:01.818951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:42.656 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:42.656 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:42.656 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:42.656 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:42.656 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:42.656 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:12:42.656 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:42.656 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:42.656 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:42.656 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:42.656 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:42.656 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.656 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:42.656 "name": "Existed_Raid", 00:12:42.656 "uuid": "b405557c-8693-4db3-aa7f-2a971a37f2f0", 00:12:42.656 "strip_size_kb": 64, 00:12:42.656 "state": "configuring", 00:12:42.656 "raid_level": "concat", 00:12:42.656 "superblock": true, 00:12:42.656 "num_base_bdevs": 2, 00:12:42.656 "num_base_bdevs_discovered": 0, 00:12:42.656 "num_base_bdevs_operational": 2, 00:12:42.656 "base_bdevs_list": [ 00:12:42.656 { 00:12:42.656 "name": "BaseBdev1", 00:12:42.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.656 "is_configured": false, 00:12:42.656 "data_offset": 0, 00:12:42.656 "data_size": 0 00:12:42.656 }, 00:12:42.656 { 00:12:42.656 "name": "BaseBdev2", 00:12:42.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.656 "is_configured": false, 00:12:42.656 "data_offset": 0, 00:12:42.656 "data_size": 0 00:12:42.656 } 00:12:42.656 ] 00:12:42.656 }' 00:12:42.656 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:42.656 21:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:12:43.223 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:43.481 [2024-07-13 21:56:02.624875] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:43.481 [2024-07-13 21:56:02.624913] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:12:43.481 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:43.481 [2024-07-13 21:56:02.805403] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:43.481 [2024-07-13 21:56:02.805438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:43.481 [2024-07-13 21:56:02.805448] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:43.481 [2024-07-13 21:56:02.805460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:43.481 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:43.740 [2024-07-13 21:56:03.022219] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:43.740 BaseBdev1 00:12:43.740 21:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:43.740 21:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:43.740 21:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 
00:12:43.740 21:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:43.740 21:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:43.740 21:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:43.740 21:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:43.999 21:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:43.999 [ 00:12:43.999 { 00:12:43.999 "name": "BaseBdev1", 00:12:43.999 "aliases": [ 00:12:43.999 "942d3dfd-0951-44af-983e-c09f1c1b73ba" 00:12:43.999 ], 00:12:43.999 "product_name": "Malloc disk", 00:12:43.999 "block_size": 512, 00:12:43.999 "num_blocks": 65536, 00:12:43.999 "uuid": "942d3dfd-0951-44af-983e-c09f1c1b73ba", 00:12:43.999 "assigned_rate_limits": { 00:12:43.999 "rw_ios_per_sec": 0, 00:12:43.999 "rw_mbytes_per_sec": 0, 00:12:43.999 "r_mbytes_per_sec": 0, 00:12:43.999 "w_mbytes_per_sec": 0 00:12:43.999 }, 00:12:43.999 "claimed": true, 00:12:43.999 "claim_type": "exclusive_write", 00:12:43.999 "zoned": false, 00:12:43.999 "supported_io_types": { 00:12:43.999 "read": true, 00:12:43.999 "write": true, 00:12:43.999 "unmap": true, 00:12:43.999 "flush": true, 00:12:43.999 "reset": true, 00:12:43.999 "nvme_admin": false, 00:12:43.999 "nvme_io": false, 00:12:43.999 "nvme_io_md": false, 00:12:43.999 "write_zeroes": true, 00:12:43.999 "zcopy": true, 00:12:43.999 "get_zone_info": false, 00:12:43.999 "zone_management": false, 00:12:43.999 "zone_append": false, 00:12:43.999 "compare": false, 00:12:43.999 "compare_and_write": false, 00:12:43.999 "abort": true, 00:12:43.999 "seek_hole": false, 00:12:43.999 
"seek_data": false, 00:12:43.999 "copy": true, 00:12:43.999 "nvme_iov_md": false 00:12:43.999 }, 00:12:43.999 "memory_domains": [ 00:12:43.999 { 00:12:43.999 "dma_device_id": "system", 00:12:43.999 "dma_device_type": 1 00:12:43.999 }, 00:12:43.999 { 00:12:43.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.999 "dma_device_type": 2 00:12:43.999 } 00:12:43.999 ], 00:12:43.999 "driver_specific": {} 00:12:43.999 } 00:12:43.999 ] 00:12:43.999 21:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:43.999 21:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:43.999 21:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:43.999 21:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:43.999 21:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:43.999 21:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:43.999 21:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:43.999 21:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:43.999 21:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:43.999 21:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:43.999 21:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:43.999 21:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:43.999 21:56:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.259 21:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:44.259 "name": "Existed_Raid", 00:12:44.259 "uuid": "e9d24012-12ec-4361-8932-a04903edc81c", 00:12:44.259 "strip_size_kb": 64, 00:12:44.259 "state": "configuring", 00:12:44.259 "raid_level": "concat", 00:12:44.259 "superblock": true, 00:12:44.259 "num_base_bdevs": 2, 00:12:44.259 "num_base_bdevs_discovered": 1, 00:12:44.259 "num_base_bdevs_operational": 2, 00:12:44.259 "base_bdevs_list": [ 00:12:44.259 { 00:12:44.259 "name": "BaseBdev1", 00:12:44.259 "uuid": "942d3dfd-0951-44af-983e-c09f1c1b73ba", 00:12:44.259 "is_configured": true, 00:12:44.259 "data_offset": 2048, 00:12:44.259 "data_size": 63488 00:12:44.259 }, 00:12:44.259 { 00:12:44.259 "name": "BaseBdev2", 00:12:44.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.259 "is_configured": false, 00:12:44.259 "data_offset": 0, 00:12:44.259 "data_size": 0 00:12:44.259 } 00:12:44.259 ] 00:12:44.259 }' 00:12:44.259 21:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:44.259 21:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.828 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:44.828 [2024-07-13 21:56:04.165261] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:44.828 [2024-07-13 21:56:04.165306] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:12:44.828 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n 
Existed_Raid 00:12:45.087 [2024-07-13 21:56:04.345792] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:45.087 [2024-07-13 21:56:04.347517] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:45.087 [2024-07-13 21:56:04.347550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:45.087 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:45.087 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:45.087 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:45.087 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:45.087 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:45.087 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:45.087 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:45.087 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:45.087 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:45.087 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:45.087 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:45.087 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:45.087 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:12:45.087 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.347 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:45.347 "name": "Existed_Raid", 00:12:45.347 "uuid": "77795f05-41c5-439e-a468-8937786b64ad", 00:12:45.347 "strip_size_kb": 64, 00:12:45.347 "state": "configuring", 00:12:45.347 "raid_level": "concat", 00:12:45.347 "superblock": true, 00:12:45.347 "num_base_bdevs": 2, 00:12:45.347 "num_base_bdevs_discovered": 1, 00:12:45.347 "num_base_bdevs_operational": 2, 00:12:45.347 "base_bdevs_list": [ 00:12:45.347 { 00:12:45.347 "name": "BaseBdev1", 00:12:45.347 "uuid": "942d3dfd-0951-44af-983e-c09f1c1b73ba", 00:12:45.347 "is_configured": true, 00:12:45.347 "data_offset": 2048, 00:12:45.347 "data_size": 63488 00:12:45.347 }, 00:12:45.347 { 00:12:45.347 "name": "BaseBdev2", 00:12:45.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.347 "is_configured": false, 00:12:45.347 "data_offset": 0, 00:12:45.347 "data_size": 0 00:12:45.347 } 00:12:45.347 ] 00:12:45.347 }' 00:12:45.347 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:45.347 21:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.915 21:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:45.915 [2024-07-13 21:56:05.222819] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:45.915 [2024-07-13 21:56:05.223045] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:12:45.915 [2024-07-13 21:56:05.223064] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:45.915 [2024-07-13 21:56:05.223313] 
bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:12:45.915 [2024-07-13 21:56:05.223474] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:12:45.915 [2024-07-13 21:56:05.223487] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:12:45.916 [2024-07-13 21:56:05.223630] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.916 BaseBdev2 00:12:45.916 21:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:45.916 21:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:45.916 21:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:45.916 21:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:45.916 21:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:45.916 21:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:45.916 21:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:46.175 21:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:46.434 [ 00:12:46.434 { 00:12:46.434 "name": "BaseBdev2", 00:12:46.435 "aliases": [ 00:12:46.435 "a77fc62e-9695-4bde-aeb0-b2b2f5b0041d" 00:12:46.435 ], 00:12:46.435 "product_name": "Malloc disk", 00:12:46.435 "block_size": 512, 00:12:46.435 "num_blocks": 65536, 00:12:46.435 "uuid": "a77fc62e-9695-4bde-aeb0-b2b2f5b0041d", 00:12:46.435 "assigned_rate_limits": { 
00:12:46.435 "rw_ios_per_sec": 0, 00:12:46.435 "rw_mbytes_per_sec": 0, 00:12:46.435 "r_mbytes_per_sec": 0, 00:12:46.435 "w_mbytes_per_sec": 0 00:12:46.435 }, 00:12:46.435 "claimed": true, 00:12:46.435 "claim_type": "exclusive_write", 00:12:46.435 "zoned": false, 00:12:46.435 "supported_io_types": { 00:12:46.435 "read": true, 00:12:46.435 "write": true, 00:12:46.435 "unmap": true, 00:12:46.435 "flush": true, 00:12:46.435 "reset": true, 00:12:46.435 "nvme_admin": false, 00:12:46.435 "nvme_io": false, 00:12:46.435 "nvme_io_md": false, 00:12:46.435 "write_zeroes": true, 00:12:46.435 "zcopy": true, 00:12:46.435 "get_zone_info": false, 00:12:46.435 "zone_management": false, 00:12:46.435 "zone_append": false, 00:12:46.435 "compare": false, 00:12:46.435 "compare_and_write": false, 00:12:46.435 "abort": true, 00:12:46.435 "seek_hole": false, 00:12:46.435 "seek_data": false, 00:12:46.435 "copy": true, 00:12:46.435 "nvme_iov_md": false 00:12:46.435 }, 00:12:46.435 "memory_domains": [ 00:12:46.435 { 00:12:46.435 "dma_device_id": "system", 00:12:46.435 "dma_device_type": 1 00:12:46.435 }, 00:12:46.435 { 00:12:46.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.435 "dma_device_type": 2 00:12:46.435 } 00:12:46.435 ], 00:12:46.435 "driver_specific": {} 00:12:46.435 } 00:12:46.435 ] 00:12:46.435 21:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:46.435 21:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:46.435 21:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:46.435 21:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:12:46.435 21:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:46.435 21:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:12:46.435 21:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:46.435 21:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:46.435 21:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:46.435 21:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:46.435 21:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:46.435 21:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:46.435 21:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:46.435 21:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.435 21:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:46.435 21:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:46.435 "name": "Existed_Raid", 00:12:46.435 "uuid": "77795f05-41c5-439e-a468-8937786b64ad", 00:12:46.435 "strip_size_kb": 64, 00:12:46.435 "state": "online", 00:12:46.435 "raid_level": "concat", 00:12:46.435 "superblock": true, 00:12:46.435 "num_base_bdevs": 2, 00:12:46.435 "num_base_bdevs_discovered": 2, 00:12:46.435 "num_base_bdevs_operational": 2, 00:12:46.435 "base_bdevs_list": [ 00:12:46.435 { 00:12:46.435 "name": "BaseBdev1", 00:12:46.435 "uuid": "942d3dfd-0951-44af-983e-c09f1c1b73ba", 00:12:46.435 "is_configured": true, 00:12:46.435 "data_offset": 2048, 00:12:46.435 "data_size": 63488 00:12:46.435 }, 00:12:46.435 { 00:12:46.435 "name": "BaseBdev2", 00:12:46.435 "uuid": "a77fc62e-9695-4bde-aeb0-b2b2f5b0041d", 
00:12:46.435 "is_configured": true, 00:12:46.435 "data_offset": 2048, 00:12:46.435 "data_size": 63488 00:12:46.435 } 00:12:46.435 ] 00:12:46.435 }' 00:12:46.435 21:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:46.435 21:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.002 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:47.002 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:47.002 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:47.002 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:47.002 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:47.002 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:47.002 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:47.002 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:47.262 [2024-07-13 21:56:06.414248] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:47.262 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:47.262 "name": "Existed_Raid", 00:12:47.262 "aliases": [ 00:12:47.262 "77795f05-41c5-439e-a468-8937786b64ad" 00:12:47.262 ], 00:12:47.262 "product_name": "Raid Volume", 00:12:47.262 "block_size": 512, 00:12:47.262 "num_blocks": 126976, 00:12:47.262 "uuid": "77795f05-41c5-439e-a468-8937786b64ad", 00:12:47.262 "assigned_rate_limits": { 00:12:47.262 "rw_ios_per_sec": 0, 00:12:47.262 "rw_mbytes_per_sec": 0, 
00:12:47.262 "r_mbytes_per_sec": 0, 00:12:47.262 "w_mbytes_per_sec": 0 00:12:47.262 }, 00:12:47.262 "claimed": false, 00:12:47.262 "zoned": false, 00:12:47.262 "supported_io_types": { 00:12:47.262 "read": true, 00:12:47.262 "write": true, 00:12:47.262 "unmap": true, 00:12:47.262 "flush": true, 00:12:47.262 "reset": true, 00:12:47.262 "nvme_admin": false, 00:12:47.262 "nvme_io": false, 00:12:47.262 "nvme_io_md": false, 00:12:47.262 "write_zeroes": true, 00:12:47.262 "zcopy": false, 00:12:47.262 "get_zone_info": false, 00:12:47.262 "zone_management": false, 00:12:47.262 "zone_append": false, 00:12:47.262 "compare": false, 00:12:47.262 "compare_and_write": false, 00:12:47.262 "abort": false, 00:12:47.262 "seek_hole": false, 00:12:47.262 "seek_data": false, 00:12:47.262 "copy": false, 00:12:47.262 "nvme_iov_md": false 00:12:47.262 }, 00:12:47.262 "memory_domains": [ 00:12:47.262 { 00:12:47.262 "dma_device_id": "system", 00:12:47.262 "dma_device_type": 1 00:12:47.262 }, 00:12:47.262 { 00:12:47.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.262 "dma_device_type": 2 00:12:47.262 }, 00:12:47.262 { 00:12:47.262 "dma_device_id": "system", 00:12:47.262 "dma_device_type": 1 00:12:47.262 }, 00:12:47.262 { 00:12:47.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.262 "dma_device_type": 2 00:12:47.262 } 00:12:47.262 ], 00:12:47.262 "driver_specific": { 00:12:47.262 "raid": { 00:12:47.262 "uuid": "77795f05-41c5-439e-a468-8937786b64ad", 00:12:47.262 "strip_size_kb": 64, 00:12:47.262 "state": "online", 00:12:47.262 "raid_level": "concat", 00:12:47.262 "superblock": true, 00:12:47.262 "num_base_bdevs": 2, 00:12:47.262 "num_base_bdevs_discovered": 2, 00:12:47.262 "num_base_bdevs_operational": 2, 00:12:47.262 "base_bdevs_list": [ 00:12:47.262 { 00:12:47.262 "name": "BaseBdev1", 00:12:47.262 "uuid": "942d3dfd-0951-44af-983e-c09f1c1b73ba", 00:12:47.262 "is_configured": true, 00:12:47.262 "data_offset": 2048, 00:12:47.262 "data_size": 63488 00:12:47.262 }, 00:12:47.262 { 
00:12:47.262 "name": "BaseBdev2", 00:12:47.262 "uuid": "a77fc62e-9695-4bde-aeb0-b2b2f5b0041d", 00:12:47.262 "is_configured": true, 00:12:47.262 "data_offset": 2048, 00:12:47.262 "data_size": 63488 00:12:47.262 } 00:12:47.262 ] 00:12:47.262 } 00:12:47.262 } 00:12:47.262 }' 00:12:47.262 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:47.262 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:47.262 BaseBdev2' 00:12:47.262 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:47.262 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:47.262 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:47.262 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:47.262 "name": "BaseBdev1", 00:12:47.262 "aliases": [ 00:12:47.262 "942d3dfd-0951-44af-983e-c09f1c1b73ba" 00:12:47.263 ], 00:12:47.263 "product_name": "Malloc disk", 00:12:47.263 "block_size": 512, 00:12:47.263 "num_blocks": 65536, 00:12:47.263 "uuid": "942d3dfd-0951-44af-983e-c09f1c1b73ba", 00:12:47.263 "assigned_rate_limits": { 00:12:47.263 "rw_ios_per_sec": 0, 00:12:47.263 "rw_mbytes_per_sec": 0, 00:12:47.263 "r_mbytes_per_sec": 0, 00:12:47.263 "w_mbytes_per_sec": 0 00:12:47.263 }, 00:12:47.263 "claimed": true, 00:12:47.263 "claim_type": "exclusive_write", 00:12:47.263 "zoned": false, 00:12:47.263 "supported_io_types": { 00:12:47.263 "read": true, 00:12:47.263 "write": true, 00:12:47.263 "unmap": true, 00:12:47.263 "flush": true, 00:12:47.263 "reset": true, 00:12:47.263 "nvme_admin": false, 00:12:47.263 "nvme_io": false, 00:12:47.263 "nvme_io_md": false, 
00:12:47.263 "write_zeroes": true, 00:12:47.263 "zcopy": true, 00:12:47.263 "get_zone_info": false, 00:12:47.263 "zone_management": false, 00:12:47.263 "zone_append": false, 00:12:47.263 "compare": false, 00:12:47.263 "compare_and_write": false, 00:12:47.263 "abort": true, 00:12:47.263 "seek_hole": false, 00:12:47.263 "seek_data": false, 00:12:47.263 "copy": true, 00:12:47.263 "nvme_iov_md": false 00:12:47.263 }, 00:12:47.263 "memory_domains": [ 00:12:47.263 { 00:12:47.263 "dma_device_id": "system", 00:12:47.263 "dma_device_type": 1 00:12:47.263 }, 00:12:47.263 { 00:12:47.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.263 "dma_device_type": 2 00:12:47.263 } 00:12:47.263 ], 00:12:47.263 "driver_specific": {} 00:12:47.263 }' 00:12:47.263 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:47.522 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:47.522 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:47.522 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:47.522 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:47.522 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:47.522 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:47.522 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:47.522 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:47.522 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:47.780 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:47.780 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null 
== null ]] 00:12:47.780 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:47.780 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:47.780 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:47.780 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:47.780 "name": "BaseBdev2", 00:12:47.780 "aliases": [ 00:12:47.780 "a77fc62e-9695-4bde-aeb0-b2b2f5b0041d" 00:12:47.780 ], 00:12:47.780 "product_name": "Malloc disk", 00:12:47.780 "block_size": 512, 00:12:47.780 "num_blocks": 65536, 00:12:47.780 "uuid": "a77fc62e-9695-4bde-aeb0-b2b2f5b0041d", 00:12:47.780 "assigned_rate_limits": { 00:12:47.780 "rw_ios_per_sec": 0, 00:12:47.780 "rw_mbytes_per_sec": 0, 00:12:47.780 "r_mbytes_per_sec": 0, 00:12:47.780 "w_mbytes_per_sec": 0 00:12:47.780 }, 00:12:47.780 "claimed": true, 00:12:47.780 "claim_type": "exclusive_write", 00:12:47.780 "zoned": false, 00:12:47.780 "supported_io_types": { 00:12:47.780 "read": true, 00:12:47.780 "write": true, 00:12:47.780 "unmap": true, 00:12:47.780 "flush": true, 00:12:47.780 "reset": true, 00:12:47.780 "nvme_admin": false, 00:12:47.780 "nvme_io": false, 00:12:47.780 "nvme_io_md": false, 00:12:47.780 "write_zeroes": true, 00:12:47.780 "zcopy": true, 00:12:47.780 "get_zone_info": false, 00:12:47.780 "zone_management": false, 00:12:47.780 "zone_append": false, 00:12:47.780 "compare": false, 00:12:47.780 "compare_and_write": false, 00:12:47.780 "abort": true, 00:12:47.780 "seek_hole": false, 00:12:47.780 "seek_data": false, 00:12:47.780 "copy": true, 00:12:47.780 "nvme_iov_md": false 00:12:47.780 }, 00:12:47.780 "memory_domains": [ 00:12:47.780 { 00:12:47.780 "dma_device_id": "system", 00:12:47.780 "dma_device_type": 1 00:12:47.780 }, 00:12:47.780 { 
00:12:47.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.780 "dma_device_type": 2 00:12:47.780 } 00:12:47.780 ], 00:12:47.780 "driver_specific": {} 00:12:47.780 }' 00:12:47.780 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:48.038 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:48.038 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:48.038 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:48.038 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:48.038 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:48.038 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:48.038 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:48.038 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:48.038 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:48.038 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:48.297 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:48.297 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:48.297 [2024-07-13 21:56:07.621250] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:48.297 [2024-07-13 21:56:07.621281] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:48.297 [2024-07-13 21:56:07.621329] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:12:48.297 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:48.297 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:12:48.297 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:48.297 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:12:48.297 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:12:48.297 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:12:48.297 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:48.297 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:12:48.297 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:48.297 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:48.297 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:12:48.298 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:48.298 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:48.298 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:48.298 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:48.298 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:48.298 21:56:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.557 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:48.557 "name": "Existed_Raid", 00:12:48.557 "uuid": "77795f05-41c5-439e-a468-8937786b64ad", 00:12:48.557 "strip_size_kb": 64, 00:12:48.557 "state": "offline", 00:12:48.557 "raid_level": "concat", 00:12:48.557 "superblock": true, 00:12:48.557 "num_base_bdevs": 2, 00:12:48.557 "num_base_bdevs_discovered": 1, 00:12:48.557 "num_base_bdevs_operational": 1, 00:12:48.557 "base_bdevs_list": [ 00:12:48.557 { 00:12:48.557 "name": null, 00:12:48.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.557 "is_configured": false, 00:12:48.557 "data_offset": 2048, 00:12:48.557 "data_size": 63488 00:12:48.557 }, 00:12:48.557 { 00:12:48.557 "name": "BaseBdev2", 00:12:48.557 "uuid": "a77fc62e-9695-4bde-aeb0-b2b2f5b0041d", 00:12:48.557 "is_configured": true, 00:12:48.557 "data_offset": 2048, 00:12:48.557 "data_size": 63488 00:12:48.557 } 00:12:48.557 ] 00:12:48.557 }' 00:12:48.557 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:48.557 21:56:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.125 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:49.125 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:49.125 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:49.125 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:49.125 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:49.125 21:56:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:49.125 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:49.384 [2024-07-13 21:56:08.628624] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:49.384 [2024-07-13 21:56:08.628678] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:12:49.384 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:49.384 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:49.384 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:49.384 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:49.643 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:49.643 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:49.643 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:12:49.643 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1353361 00:12:49.643 21:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1353361 ']' 00:12:49.643 21:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 1353361 00:12:49.643 21:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:12:49.643 21:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:12:49.643 21:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1353361 00:12:49.643 21:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:49.643 21:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:49.643 21:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1353361' 00:12:49.643 killing process with pid 1353361 00:12:49.643 21:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 1353361 00:12:49.643 [2024-07-13 21:56:08.969622] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:49.643 21:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 1353361 00:12:49.643 [2024-07-13 21:56:08.987112] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:51.022 21:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:12:51.022 00:12:51.022 real 0m9.367s 00:12:51.022 user 0m15.386s 00:12:51.022 sys 0m1.756s 00:12:51.022 21:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:51.022 21:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.022 ************************************ 00:12:51.022 END TEST raid_state_function_test_sb 00:12:51.022 ************************************ 00:12:51.022 21:56:10 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:51.022 21:56:10 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:12:51.022 21:56:10 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:51.022 21:56:10 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:51.022 21:56:10 bdev_raid -- common/autotest_common.sh@10 -- # set 
+x 00:12:51.022 ************************************ 00:12:51.022 START TEST raid_superblock_test 00:12:51.022 ************************************ 00:12:51.022 21:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 2 00:12:51.022 21:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:12:51.022 21:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:12:51.022 21:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:12:51.022 21:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:12:51.022 21:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:12:51.022 21:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:12:51.022 21:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:12:51.022 21:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:12:51.022 21:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:12:51.022 21:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:12:51.022 21:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:12:51.022 21:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:12:51.022 21:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:12:51.022 21:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:12:51.022 21:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:12:51.022 21:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:12:51.022 21:56:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1355198 00:12:51.022 21:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1355198 /var/tmp/spdk-raid.sock 00:12:51.022 21:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:12:51.022 21:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 1355198 ']' 00:12:51.022 21:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:51.022 21:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:51.022 21:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:51.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:51.022 21:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:51.022 21:56:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.022 [2024-07-13 21:56:10.361264] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:12:51.022 [2024-07-13 21:56:10.361370] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1355198 ] 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3d:02.3 cannot be used 
00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:51.282 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:51.282 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.282 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:51.282 [2024-07-13 21:56:10.519717] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.541 [2024-07-13 21:56:10.724503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.801 [2024-07-13 21:56:10.962074] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:51.801 [2024-07-13 21:56:10.962100] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:51.801 21:56:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:51.801 21:56:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:12:51.801 21:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:12:51.801 21:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:51.801 21:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:12:51.801 21:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:12:51.801 21:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:51.801 21:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:51.801 21:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:51.801 21:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:51.801 21:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:12:52.060 malloc1 00:12:52.060 21:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:52.320 [2024-07-13 21:56:11.473741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:52.320 [2024-07-13 21:56:11.473795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.320 [2024-07-13 21:56:11.473817] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:12:52.320 [2024-07-13 21:56:11.473829] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.320 [2024-07-13 21:56:11.475881] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.320 [2024-07-13 21:56:11.475920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:52.320 pt1 00:12:52.320 21:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:52.320 21:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:52.320 21:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:12:52.320 21:56:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:12:52.320 21:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:52.320 21:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:52.320 21:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:52.320 21:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:52.320 21:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:12:52.320 malloc2 00:12:52.320 21:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:52.617 [2024-07-13 21:56:11.853255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:52.617 [2024-07-13 21:56:11.853305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.617 [2024-07-13 21:56:11.853326] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:12:52.617 [2024-07-13 21:56:11.853336] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.617 [2024-07-13 21:56:11.855463] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.617 [2024-07-13 21:56:11.855497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:52.617 pt2 00:12:52.617 21:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:52.617 21:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:52.617 21:56:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:12:52.876 [2024-07-13 21:56:12.013712] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:52.876 [2024-07-13 21:56:12.015539] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:52.876 [2024-07-13 21:56:12.015707] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040880 00:12:52.876 [2024-07-13 21:56:12.015723] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:52.876 [2024-07-13 21:56:12.015995] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:12:52.876 [2024-07-13 21:56:12.016189] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040880 00:12:52.876 [2024-07-13 21:56:12.016202] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040880 00:12:52.876 [2024-07-13 21:56:12.016349] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.876 21:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:52.876 21:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:52.876 21:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:52.876 21:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:52.876 21:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:52.876 21:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:52.876 21:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- 
# local raid_bdev_info 00:12:52.876 21:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:52.876 21:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:52.876 21:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:52.876 21:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:52.876 21:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.876 21:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:52.876 "name": "raid_bdev1", 00:12:52.876 "uuid": "464c3158-46e3-4ec5-a999-5cfa436050da", 00:12:52.876 "strip_size_kb": 64, 00:12:52.876 "state": "online", 00:12:52.876 "raid_level": "concat", 00:12:52.876 "superblock": true, 00:12:52.876 "num_base_bdevs": 2, 00:12:52.876 "num_base_bdevs_discovered": 2, 00:12:52.876 "num_base_bdevs_operational": 2, 00:12:52.876 "base_bdevs_list": [ 00:12:52.876 { 00:12:52.876 "name": "pt1", 00:12:52.876 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:52.876 "is_configured": true, 00:12:52.876 "data_offset": 2048, 00:12:52.876 "data_size": 63488 00:12:52.876 }, 00:12:52.876 { 00:12:52.876 "name": "pt2", 00:12:52.876 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:52.876 "is_configured": true, 00:12:52.876 "data_offset": 2048, 00:12:52.876 "data_size": 63488 00:12:52.876 } 00:12:52.876 ] 00:12:52.876 }' 00:12:52.876 21:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:52.876 21:56:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.445 21:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:12:53.445 21:56:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:53.445 21:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:53.445 21:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:53.445 21:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:53.445 21:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:53.445 21:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:53.445 21:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:53.704 [2024-07-13 21:56:12.836076] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:53.704 21:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:53.704 "name": "raid_bdev1", 00:12:53.704 "aliases": [ 00:12:53.704 "464c3158-46e3-4ec5-a999-5cfa436050da" 00:12:53.704 ], 00:12:53.704 "product_name": "Raid Volume", 00:12:53.704 "block_size": 512, 00:12:53.704 "num_blocks": 126976, 00:12:53.704 "uuid": "464c3158-46e3-4ec5-a999-5cfa436050da", 00:12:53.704 "assigned_rate_limits": { 00:12:53.704 "rw_ios_per_sec": 0, 00:12:53.704 "rw_mbytes_per_sec": 0, 00:12:53.704 "r_mbytes_per_sec": 0, 00:12:53.704 "w_mbytes_per_sec": 0 00:12:53.704 }, 00:12:53.704 "claimed": false, 00:12:53.704 "zoned": false, 00:12:53.704 "supported_io_types": { 00:12:53.704 "read": true, 00:12:53.704 "write": true, 00:12:53.704 "unmap": true, 00:12:53.704 "flush": true, 00:12:53.704 "reset": true, 00:12:53.704 "nvme_admin": false, 00:12:53.704 "nvme_io": false, 00:12:53.704 "nvme_io_md": false, 00:12:53.704 "write_zeroes": true, 00:12:53.704 "zcopy": false, 00:12:53.704 "get_zone_info": false, 00:12:53.704 "zone_management": false, 00:12:53.704 "zone_append": false, 
00:12:53.704 "compare": false, 00:12:53.704 "compare_and_write": false, 00:12:53.704 "abort": false, 00:12:53.704 "seek_hole": false, 00:12:53.704 "seek_data": false, 00:12:53.704 "copy": false, 00:12:53.704 "nvme_iov_md": false 00:12:53.704 }, 00:12:53.704 "memory_domains": [ 00:12:53.704 { 00:12:53.704 "dma_device_id": "system", 00:12:53.704 "dma_device_type": 1 00:12:53.704 }, 00:12:53.704 { 00:12:53.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.704 "dma_device_type": 2 00:12:53.704 }, 00:12:53.704 { 00:12:53.704 "dma_device_id": "system", 00:12:53.704 "dma_device_type": 1 00:12:53.704 }, 00:12:53.704 { 00:12:53.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.704 "dma_device_type": 2 00:12:53.704 } 00:12:53.704 ], 00:12:53.704 "driver_specific": { 00:12:53.704 "raid": { 00:12:53.704 "uuid": "464c3158-46e3-4ec5-a999-5cfa436050da", 00:12:53.704 "strip_size_kb": 64, 00:12:53.704 "state": "online", 00:12:53.704 "raid_level": "concat", 00:12:53.704 "superblock": true, 00:12:53.704 "num_base_bdevs": 2, 00:12:53.704 "num_base_bdevs_discovered": 2, 00:12:53.704 "num_base_bdevs_operational": 2, 00:12:53.704 "base_bdevs_list": [ 00:12:53.704 { 00:12:53.704 "name": "pt1", 00:12:53.704 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:53.704 "is_configured": true, 00:12:53.704 "data_offset": 2048, 00:12:53.704 "data_size": 63488 00:12:53.704 }, 00:12:53.704 { 00:12:53.704 "name": "pt2", 00:12:53.704 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:53.704 "is_configured": true, 00:12:53.704 "data_offset": 2048, 00:12:53.704 "data_size": 63488 00:12:53.704 } 00:12:53.704 ] 00:12:53.704 } 00:12:53.704 } 00:12:53.704 }' 00:12:53.704 21:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:53.704 21:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:53.704 pt2' 00:12:53.704 21:56:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:53.704 21:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:53.704 21:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:53.704 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:53.704 "name": "pt1", 00:12:53.704 "aliases": [ 00:12:53.704 "00000000-0000-0000-0000-000000000001" 00:12:53.704 ], 00:12:53.704 "product_name": "passthru", 00:12:53.704 "block_size": 512, 00:12:53.704 "num_blocks": 65536, 00:12:53.704 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:53.704 "assigned_rate_limits": { 00:12:53.704 "rw_ios_per_sec": 0, 00:12:53.704 "rw_mbytes_per_sec": 0, 00:12:53.704 "r_mbytes_per_sec": 0, 00:12:53.704 "w_mbytes_per_sec": 0 00:12:53.704 }, 00:12:53.704 "claimed": true, 00:12:53.704 "claim_type": "exclusive_write", 00:12:53.704 "zoned": false, 00:12:53.704 "supported_io_types": { 00:12:53.704 "read": true, 00:12:53.704 "write": true, 00:12:53.704 "unmap": true, 00:12:53.704 "flush": true, 00:12:53.704 "reset": true, 00:12:53.704 "nvme_admin": false, 00:12:53.704 "nvme_io": false, 00:12:53.704 "nvme_io_md": false, 00:12:53.704 "write_zeroes": true, 00:12:53.704 "zcopy": true, 00:12:53.704 "get_zone_info": false, 00:12:53.704 "zone_management": false, 00:12:53.704 "zone_append": false, 00:12:53.704 "compare": false, 00:12:53.705 "compare_and_write": false, 00:12:53.705 "abort": true, 00:12:53.705 "seek_hole": false, 00:12:53.705 "seek_data": false, 00:12:53.705 "copy": true, 00:12:53.705 "nvme_iov_md": false 00:12:53.705 }, 00:12:53.705 "memory_domains": [ 00:12:53.705 { 00:12:53.705 "dma_device_id": "system", 00:12:53.705 "dma_device_type": 1 00:12:53.705 }, 00:12:53.705 { 00:12:53.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.705 
"dma_device_type": 2 00:12:53.705 } 00:12:53.705 ], 00:12:53.705 "driver_specific": { 00:12:53.705 "passthru": { 00:12:53.705 "name": "pt1", 00:12:53.705 "base_bdev_name": "malloc1" 00:12:53.705 } 00:12:53.705 } 00:12:53.705 }' 00:12:53.705 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:53.964 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:53.964 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:53.964 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:53.964 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:53.964 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:53.964 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:53.964 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:53.964 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:53.964 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:53.964 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:54.223 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:54.223 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:54.223 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:54.223 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:54.223 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:54.223 "name": "pt2", 00:12:54.223 "aliases": [ 00:12:54.223 
"00000000-0000-0000-0000-000000000002" 00:12:54.223 ], 00:12:54.223 "product_name": "passthru", 00:12:54.223 "block_size": 512, 00:12:54.223 "num_blocks": 65536, 00:12:54.223 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:54.223 "assigned_rate_limits": { 00:12:54.223 "rw_ios_per_sec": 0, 00:12:54.223 "rw_mbytes_per_sec": 0, 00:12:54.223 "r_mbytes_per_sec": 0, 00:12:54.223 "w_mbytes_per_sec": 0 00:12:54.223 }, 00:12:54.223 "claimed": true, 00:12:54.223 "claim_type": "exclusive_write", 00:12:54.223 "zoned": false, 00:12:54.223 "supported_io_types": { 00:12:54.223 "read": true, 00:12:54.223 "write": true, 00:12:54.223 "unmap": true, 00:12:54.223 "flush": true, 00:12:54.223 "reset": true, 00:12:54.223 "nvme_admin": false, 00:12:54.223 "nvme_io": false, 00:12:54.223 "nvme_io_md": false, 00:12:54.223 "write_zeroes": true, 00:12:54.223 "zcopy": true, 00:12:54.223 "get_zone_info": false, 00:12:54.223 "zone_management": false, 00:12:54.223 "zone_append": false, 00:12:54.223 "compare": false, 00:12:54.223 "compare_and_write": false, 00:12:54.223 "abort": true, 00:12:54.223 "seek_hole": false, 00:12:54.223 "seek_data": false, 00:12:54.223 "copy": true, 00:12:54.223 "nvme_iov_md": false 00:12:54.223 }, 00:12:54.223 "memory_domains": [ 00:12:54.223 { 00:12:54.223 "dma_device_id": "system", 00:12:54.223 "dma_device_type": 1 00:12:54.223 }, 00:12:54.223 { 00:12:54.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.223 "dma_device_type": 2 00:12:54.223 } 00:12:54.223 ], 00:12:54.223 "driver_specific": { 00:12:54.223 "passthru": { 00:12:54.223 "name": "pt2", 00:12:54.223 "base_bdev_name": "malloc2" 00:12:54.223 } 00:12:54.223 } 00:12:54.223 }' 00:12:54.223 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:54.223 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:54.223 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:54.223 21:56:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:54.481 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:54.481 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:54.481 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:54.481 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:54.481 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:54.481 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:54.481 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:54.481 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:54.481 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:54.481 21:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:12:54.741 [2024-07-13 21:56:13.995166] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:54.741 21:56:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=464c3158-46e3-4ec5-a999-5cfa436050da 00:12:54.741 21:56:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 464c3158-46e3-4ec5-a999-5cfa436050da ']' 00:12:54.741 21:56:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:55.000 [2024-07-13 21:56:14.163371] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:55.000 [2024-07-13 21:56:14.163397] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from 
online to offline 00:12:55.000 [2024-07-13 21:56:14.163469] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:55.000 [2024-07-13 21:56:14.163513] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:55.000 [2024-07-13 21:56:14.163530] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040880 name raid_bdev1, state offline 00:12:55.000 21:56:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.000 21:56:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:12:55.000 21:56:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:12:55.000 21:56:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:12:55.000 21:56:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:55.000 21:56:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:55.258 21:56:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:55.258 21:56:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:55.516 21:56:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:55.516 21:56:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:55.516 21:56:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:12:55.516 
21:56:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:12:55.516 21:56:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:12:55.516 21:56:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:12:55.516 21:56:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:55.516 21:56:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:55.516 21:56:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:55.516 21:56:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:55.516 21:56:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:55.516 21:56:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:55.516 21:56:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:55.516 21:56:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:12:55.516 21:56:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' 
-n raid_bdev1 00:12:55.775 [2024-07-13 21:56:14.993559] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:55.775 [2024-07-13 21:56:14.995273] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:55.775 [2024-07-13 21:56:14.995329] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:55.775 [2024-07-13 21:56:14.995372] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:55.775 [2024-07-13 21:56:14.995388] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:55.775 [2024-07-13 21:56:14.995400] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state configuring 00:12:55.775 request: 00:12:55.775 { 00:12:55.775 "name": "raid_bdev1", 00:12:55.775 "raid_level": "concat", 00:12:55.775 "base_bdevs": [ 00:12:55.775 "malloc1", 00:12:55.775 "malloc2" 00:12:55.775 ], 00:12:55.775 "strip_size_kb": 64, 00:12:55.775 "superblock": false, 00:12:55.775 "method": "bdev_raid_create", 00:12:55.775 "req_id": 1 00:12:55.775 } 00:12:55.775 Got JSON-RPC error response 00:12:55.775 response: 00:12:55.775 { 00:12:55.775 "code": -17, 00:12:55.775 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:55.775 } 00:12:55.775 21:56:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:12:55.775 21:56:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:55.775 21:56:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:55.775 21:56:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:55.775 21:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.775 21:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:12:56.034 21:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:12:56.034 21:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:12:56.034 21:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:56.034 [2024-07-13 21:56:15.326367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:56.034 [2024-07-13 21:56:15.326423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.034 [2024-07-13 21:56:15.326444] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:12:56.034 [2024-07-13 21:56:15.326459] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.034 [2024-07-13 21:56:15.328577] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.034 [2024-07-13 21:56:15.328627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:56.034 [2024-07-13 21:56:15.328703] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:56.034 [2024-07-13 21:56:15.328763] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:56.034 pt1 00:12:56.034 21:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:12:56.034 21:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:56.034 21:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:56.034 21:56:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:56.034 21:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:56.034 21:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:56.034 21:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:56.034 21:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:56.034 21:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:56.034 21:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:56.034 21:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:56.034 21:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.293 21:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:56.293 "name": "raid_bdev1", 00:12:56.293 "uuid": "464c3158-46e3-4ec5-a999-5cfa436050da", 00:12:56.293 "strip_size_kb": 64, 00:12:56.293 "state": "configuring", 00:12:56.293 "raid_level": "concat", 00:12:56.293 "superblock": true, 00:12:56.293 "num_base_bdevs": 2, 00:12:56.293 "num_base_bdevs_discovered": 1, 00:12:56.293 "num_base_bdevs_operational": 2, 00:12:56.293 "base_bdevs_list": [ 00:12:56.293 { 00:12:56.293 "name": "pt1", 00:12:56.293 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:56.293 "is_configured": true, 00:12:56.293 "data_offset": 2048, 00:12:56.293 "data_size": 63488 00:12:56.293 }, 00:12:56.293 { 00:12:56.293 "name": null, 00:12:56.293 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:56.293 "is_configured": false, 00:12:56.293 "data_offset": 2048, 00:12:56.293 "data_size": 63488 00:12:56.293 } 00:12:56.293 ] 00:12:56.293 }' 00:12:56.293 
21:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:56.293 21:56:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.860 21:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:12:56.860 21:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:12:56.860 21:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:56.860 21:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:56.860 [2024-07-13 21:56:16.136487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:56.860 [2024-07-13 21:56:16.136584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.860 [2024-07-13 21:56:16.136606] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041d80 00:12:56.860 [2024-07-13 21:56:16.136620] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.860 [2024-07-13 21:56:16.137082] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.860 [2024-07-13 21:56:16.137106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:56.860 [2024-07-13 21:56:16.137180] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:56.860 [2024-07-13 21:56:16.137208] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:56.860 [2024-07-13 21:56:16.137338] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:12:56.860 [2024-07-13 21:56:16.137351] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:56.860 [2024-07-13 21:56:16.137572] 
bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:12:56.860 [2024-07-13 21:56:16.137729] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:12:56.860 [2024-07-13 21:56:16.137739] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:12:56.860 [2024-07-13 21:56:16.137866] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.860 pt2 00:12:56.860 21:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:56.860 21:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:56.860 21:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:56.860 21:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:56.860 21:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:56.860 21:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:56.860 21:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:56.860 21:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:56.860 21:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:56.860 21:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:56.860 21:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:56.860 21:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:56.860 21:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:12:56.860 21:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.119 21:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:57.119 "name": "raid_bdev1", 00:12:57.119 "uuid": "464c3158-46e3-4ec5-a999-5cfa436050da", 00:12:57.119 "strip_size_kb": 64, 00:12:57.119 "state": "online", 00:12:57.119 "raid_level": "concat", 00:12:57.119 "superblock": true, 00:12:57.119 "num_base_bdevs": 2, 00:12:57.119 "num_base_bdevs_discovered": 2, 00:12:57.119 "num_base_bdevs_operational": 2, 00:12:57.119 "base_bdevs_list": [ 00:12:57.119 { 00:12:57.119 "name": "pt1", 00:12:57.119 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:57.119 "is_configured": true, 00:12:57.119 "data_offset": 2048, 00:12:57.119 "data_size": 63488 00:12:57.119 }, 00:12:57.119 { 00:12:57.119 "name": "pt2", 00:12:57.119 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:57.119 "is_configured": true, 00:12:57.119 "data_offset": 2048, 00:12:57.119 "data_size": 63488 00:12:57.119 } 00:12:57.119 ] 00:12:57.119 }' 00:12:57.119 21:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:57.119 21:56:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.687 21:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:12:57.687 21:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:57.687 21:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:57.687 21:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:57.687 21:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:57.687 21:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:57.687 21:56:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:57.687 21:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:57.687 [2024-07-13 21:56:16.954865] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.687 21:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:57.687 "name": "raid_bdev1", 00:12:57.687 "aliases": [ 00:12:57.687 "464c3158-46e3-4ec5-a999-5cfa436050da" 00:12:57.687 ], 00:12:57.687 "product_name": "Raid Volume", 00:12:57.687 "block_size": 512, 00:12:57.687 "num_blocks": 126976, 00:12:57.687 "uuid": "464c3158-46e3-4ec5-a999-5cfa436050da", 00:12:57.687 "assigned_rate_limits": { 00:12:57.687 "rw_ios_per_sec": 0, 00:12:57.687 "rw_mbytes_per_sec": 0, 00:12:57.687 "r_mbytes_per_sec": 0, 00:12:57.687 "w_mbytes_per_sec": 0 00:12:57.687 }, 00:12:57.687 "claimed": false, 00:12:57.687 "zoned": false, 00:12:57.687 "supported_io_types": { 00:12:57.687 "read": true, 00:12:57.687 "write": true, 00:12:57.687 "unmap": true, 00:12:57.687 "flush": true, 00:12:57.687 "reset": true, 00:12:57.687 "nvme_admin": false, 00:12:57.687 "nvme_io": false, 00:12:57.687 "nvme_io_md": false, 00:12:57.687 "write_zeroes": true, 00:12:57.687 "zcopy": false, 00:12:57.687 "get_zone_info": false, 00:12:57.687 "zone_management": false, 00:12:57.687 "zone_append": false, 00:12:57.687 "compare": false, 00:12:57.687 "compare_and_write": false, 00:12:57.687 "abort": false, 00:12:57.687 "seek_hole": false, 00:12:57.687 "seek_data": false, 00:12:57.687 "copy": false, 00:12:57.687 "nvme_iov_md": false 00:12:57.687 }, 00:12:57.687 "memory_domains": [ 00:12:57.687 { 00:12:57.687 "dma_device_id": "system", 00:12:57.687 "dma_device_type": 1 00:12:57.687 }, 00:12:57.687 { 00:12:57.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.687 "dma_device_type": 2 00:12:57.687 }, 00:12:57.687 { 00:12:57.687 
"dma_device_id": "system", 00:12:57.687 "dma_device_type": 1 00:12:57.687 }, 00:12:57.687 { 00:12:57.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.687 "dma_device_type": 2 00:12:57.687 } 00:12:57.687 ], 00:12:57.687 "driver_specific": { 00:12:57.687 "raid": { 00:12:57.687 "uuid": "464c3158-46e3-4ec5-a999-5cfa436050da", 00:12:57.687 "strip_size_kb": 64, 00:12:57.687 "state": "online", 00:12:57.687 "raid_level": "concat", 00:12:57.687 "superblock": true, 00:12:57.687 "num_base_bdevs": 2, 00:12:57.687 "num_base_bdevs_discovered": 2, 00:12:57.687 "num_base_bdevs_operational": 2, 00:12:57.687 "base_bdevs_list": [ 00:12:57.687 { 00:12:57.687 "name": "pt1", 00:12:57.687 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:57.687 "is_configured": true, 00:12:57.687 "data_offset": 2048, 00:12:57.687 "data_size": 63488 00:12:57.687 }, 00:12:57.687 { 00:12:57.687 "name": "pt2", 00:12:57.687 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:57.687 "is_configured": true, 00:12:57.687 "data_offset": 2048, 00:12:57.687 "data_size": 63488 00:12:57.687 } 00:12:57.687 ] 00:12:57.687 } 00:12:57.687 } 00:12:57.687 }' 00:12:57.687 21:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:57.688 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:57.688 pt2' 00:12:57.688 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:57.688 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:57.688 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:57.946 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:57.946 "name": "pt1", 00:12:57.946 "aliases": [ 00:12:57.946 
"00000000-0000-0000-0000-000000000001" 00:12:57.946 ], 00:12:57.946 "product_name": "passthru", 00:12:57.946 "block_size": 512, 00:12:57.946 "num_blocks": 65536, 00:12:57.946 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:57.946 "assigned_rate_limits": { 00:12:57.946 "rw_ios_per_sec": 0, 00:12:57.946 "rw_mbytes_per_sec": 0, 00:12:57.946 "r_mbytes_per_sec": 0, 00:12:57.946 "w_mbytes_per_sec": 0 00:12:57.946 }, 00:12:57.946 "claimed": true, 00:12:57.946 "claim_type": "exclusive_write", 00:12:57.946 "zoned": false, 00:12:57.946 "supported_io_types": { 00:12:57.947 "read": true, 00:12:57.947 "write": true, 00:12:57.947 "unmap": true, 00:12:57.947 "flush": true, 00:12:57.947 "reset": true, 00:12:57.947 "nvme_admin": false, 00:12:57.947 "nvme_io": false, 00:12:57.947 "nvme_io_md": false, 00:12:57.947 "write_zeroes": true, 00:12:57.947 "zcopy": true, 00:12:57.947 "get_zone_info": false, 00:12:57.947 "zone_management": false, 00:12:57.947 "zone_append": false, 00:12:57.947 "compare": false, 00:12:57.947 "compare_and_write": false, 00:12:57.947 "abort": true, 00:12:57.947 "seek_hole": false, 00:12:57.947 "seek_data": false, 00:12:57.947 "copy": true, 00:12:57.947 "nvme_iov_md": false 00:12:57.947 }, 00:12:57.947 "memory_domains": [ 00:12:57.947 { 00:12:57.947 "dma_device_id": "system", 00:12:57.947 "dma_device_type": 1 00:12:57.947 }, 00:12:57.947 { 00:12:57.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.947 "dma_device_type": 2 00:12:57.947 } 00:12:57.947 ], 00:12:57.947 "driver_specific": { 00:12:57.947 "passthru": { 00:12:57.947 "name": "pt1", 00:12:57.947 "base_bdev_name": "malloc1" 00:12:57.947 } 00:12:57.947 } 00:12:57.947 }' 00:12:57.947 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:57.947 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:57.947 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:57.947 21:56:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:57.947 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.206 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:58.206 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.206 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.206 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:58.206 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.206 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.206 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:58.206 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:58.206 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:58.206 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:58.466 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:58.466 "name": "pt2", 00:12:58.466 "aliases": [ 00:12:58.466 "00000000-0000-0000-0000-000000000002" 00:12:58.466 ], 00:12:58.466 "product_name": "passthru", 00:12:58.466 "block_size": 512, 00:12:58.466 "num_blocks": 65536, 00:12:58.466 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:58.466 "assigned_rate_limits": { 00:12:58.466 "rw_ios_per_sec": 0, 00:12:58.466 "rw_mbytes_per_sec": 0, 00:12:58.466 "r_mbytes_per_sec": 0, 00:12:58.466 "w_mbytes_per_sec": 0 00:12:58.466 }, 00:12:58.466 "claimed": true, 00:12:58.466 "claim_type": "exclusive_write", 00:12:58.466 "zoned": false, 00:12:58.466 "supported_io_types": { 
00:12:58.466 "read": true, 00:12:58.466 "write": true, 00:12:58.466 "unmap": true, 00:12:58.466 "flush": true, 00:12:58.466 "reset": true, 00:12:58.466 "nvme_admin": false, 00:12:58.466 "nvme_io": false, 00:12:58.466 "nvme_io_md": false, 00:12:58.466 "write_zeroes": true, 00:12:58.466 "zcopy": true, 00:12:58.466 "get_zone_info": false, 00:12:58.466 "zone_management": false, 00:12:58.466 "zone_append": false, 00:12:58.466 "compare": false, 00:12:58.466 "compare_and_write": false, 00:12:58.466 "abort": true, 00:12:58.466 "seek_hole": false, 00:12:58.466 "seek_data": false, 00:12:58.466 "copy": true, 00:12:58.466 "nvme_iov_md": false 00:12:58.466 }, 00:12:58.466 "memory_domains": [ 00:12:58.466 { 00:12:58.466 "dma_device_id": "system", 00:12:58.466 "dma_device_type": 1 00:12:58.466 }, 00:12:58.466 { 00:12:58.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.466 "dma_device_type": 2 00:12:58.466 } 00:12:58.466 ], 00:12:58.466 "driver_specific": { 00:12:58.466 "passthru": { 00:12:58.466 "name": "pt2", 00:12:58.466 "base_bdev_name": "malloc2" 00:12:58.466 } 00:12:58.466 } 00:12:58.466 }' 00:12:58.466 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.466 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.466 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:58.466 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.466 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.466 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:58.466 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.466 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.725 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:12:58.725 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.725 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.725 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:58.725 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:58.725 21:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:12:58.725 [2024-07-13 21:56:18.113935] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:58.984 21:56:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 464c3158-46e3-4ec5-a999-5cfa436050da '!=' 464c3158-46e3-4ec5-a999-5cfa436050da ']' 00:12:58.984 21:56:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:12:58.984 21:56:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:58.985 21:56:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:58.985 21:56:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1355198 00:12:58.985 21:56:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 1355198 ']' 00:12:58.985 21:56:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 1355198 00:12:58.985 21:56:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:12:58.985 21:56:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:58.985 21:56:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1355198 00:12:58.985 21:56:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:58.985 21:56:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:58.985 21:56:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1355198' 00:12:58.985 killing process with pid 1355198 00:12:58.985 21:56:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 1355198 00:12:58.985 [2024-07-13 21:56:18.188786] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:58.985 [2024-07-13 21:56:18.188869] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:58.985 [2024-07-13 21:56:18.188925] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:58.985 [2024-07-13 21:56:18.188941] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:12:58.985 21:56:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 1355198 00:12:58.985 [2024-07-13 21:56:18.333556] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:00.364 21:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:13:00.364 00:13:00.364 real 0m9.265s 00:13:00.364 user 0m15.295s 00:13:00.364 sys 0m1.707s 00:13:00.364 21:56:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:00.364 21:56:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.364 ************************************ 00:13:00.364 END TEST raid_superblock_test 00:13:00.364 ************************************ 00:13:00.364 21:56:19 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:00.364 21:56:19 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:13:00.364 21:56:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:00.364 21:56:19 bdev_raid -- common/autotest_common.sh@1105 
-- # xtrace_disable 00:13:00.364 21:56:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:00.364 ************************************ 00:13:00.364 START TEST raid_read_error_test 00:13:00.364 ************************************ 00:13:00.364 21:56:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 read 00:13:00.364 21:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:13:00.364 21:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:13:00.364 21:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:13:00.364 21:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:00.364 21:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:00.364 21:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:00.364 21:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:00.364 21:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:00.364 21:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:00.364 21:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:00.364 21:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:00.364 21:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:00.364 21:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:00.365 21:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:00.365 21:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:00.365 21:56:19 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@794 -- # local create_arg 00:13:00.365 21:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:00.365 21:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:00.365 21:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:13:00.365 21:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:00.365 21:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:13:00.365 21:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:00.365 21:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.GBUoV6qj6Z 00:13:00.365 21:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1357005 00:13:00.365 21:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1357005 /var/tmp/spdk-raid.sock 00:13:00.365 21:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:00.365 21:56:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 1357005 ']' 00:13:00.365 21:56:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:00.365 21:56:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:00.365 21:56:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:00.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:13:00.365 21:56:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:00.365 21:56:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.365 [2024-07-13 21:56:19.716769] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:00.365 [2024-07-13 21:56:19.716866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1357005 ] 00:13:00.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.624 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:00.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.624 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:00.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.624 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:00.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.624 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:00.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.624 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:00.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.624 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:00.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.624 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:00.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.625 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:00.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.625 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:00.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.625 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:13:00.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.625 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:00.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.625 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:00.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.625 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:00.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.625 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:00.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.625 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:00.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.625 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:00.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.625 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:00.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.625 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:00.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.625 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:00.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.625 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:00.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.625 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:00.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.625 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:00.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.625 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:00.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.625 EAL: Requested device 0000:3f:01.7 
cannot be used 00:13:00.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.625 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:00.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.625 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:00.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.625 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:00.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.625 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:00.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.625 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:00.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.625 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:00.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.625 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:00.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.625 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:00.625 [2024-07-13 21:56:19.879096] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.884 [2024-07-13 21:56:20.093840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.143 [2024-07-13 21:56:20.340087] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:01.143 [2024-07-13 21:56:20.340118] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:01.143 21:56:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:01.143 21:56:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:13:01.143 21:56:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:01.143 21:56:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:01.403 BaseBdev1_malloc 00:13:01.403 21:56:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:01.662 true 00:13:01.662 21:56:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:01.662 [2024-07-13 21:56:20.993409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:01.662 [2024-07-13 21:56:20.993458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.662 [2024-07-13 21:56:20.993479] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:13:01.662 [2024-07-13 21:56:20.993496] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.662 [2024-07-13 21:56:20.995523] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.662 [2024-07-13 21:56:20.995555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:01.662 BaseBdev1 00:13:01.662 21:56:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:01.662 21:56:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:01.921 BaseBdev2_malloc 00:13:01.921 21:56:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:02.179 true 00:13:02.179 21:56:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:02.179 [2024-07-13 21:56:21.520866] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:02.179 [2024-07-13 21:56:21.520922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.179 [2024-07-13 21:56:21.520959] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:13:02.179 [2024-07-13 21:56:21.520975] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.179 [2024-07-13 21:56:21.523013] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.179 [2024-07-13 21:56:21.523046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:02.179 BaseBdev2 00:13:02.179 21:56:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:13:02.439 [2024-07-13 21:56:21.681362] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:02.439 [2024-07-13 21:56:21.683149] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:02.439 [2024-07-13 21:56:21.683342] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040e80 00:13:02.439 [2024-07-13 21:56:21.683359] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:02.439 [2024-07-13 21:56:21.683618] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:13:02.439 [2024-07-13 21:56:21.683819] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040e80 00:13:02.439 [2024-07-13 
21:56:21.683829] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040e80 00:13:02.439 [2024-07-13 21:56:21.683996] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.439 21:56:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:02.439 21:56:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:02.439 21:56:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:02.439 21:56:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:02.439 21:56:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:02.439 21:56:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:02.439 21:56:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:02.439 21:56:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:02.439 21:56:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:02.439 21:56:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:02.439 21:56:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:02.439 21:56:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.697 21:56:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:02.697 "name": "raid_bdev1", 00:13:02.697 "uuid": "e5359df6-7904-48c7-a843-8f7e7f9a3cda", 00:13:02.697 "strip_size_kb": 64, 00:13:02.697 "state": "online", 00:13:02.697 "raid_level": "concat", 00:13:02.697 
"superblock": true, 00:13:02.697 "num_base_bdevs": 2, 00:13:02.697 "num_base_bdevs_discovered": 2, 00:13:02.697 "num_base_bdevs_operational": 2, 00:13:02.697 "base_bdevs_list": [ 00:13:02.697 { 00:13:02.697 "name": "BaseBdev1", 00:13:02.697 "uuid": "24bb84d6-5b33-5e1e-b903-535bab34e5e3", 00:13:02.697 "is_configured": true, 00:13:02.697 "data_offset": 2048, 00:13:02.697 "data_size": 63488 00:13:02.697 }, 00:13:02.697 { 00:13:02.697 "name": "BaseBdev2", 00:13:02.697 "uuid": "7a4d3eab-ab40-5948-babb-70704fc1e823", 00:13:02.697 "is_configured": true, 00:13:02.697 "data_offset": 2048, 00:13:02.697 "data_size": 63488 00:13:02.697 } 00:13:02.697 ] 00:13:02.697 }' 00:13:02.697 21:56:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:02.697 21:56:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.956 21:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:02.956 21:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:03.214 [2024-07-13 21:56:22.416719] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:13:04.152 21:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:04.152 21:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:04.152 21:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:13:04.152 21:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:13:04.152 21:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:04.152 
21:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:04.152 21:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:04.152 21:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:04.152 21:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:04.152 21:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:04.152 21:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:04.152 21:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:04.152 21:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:04.152 21:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:04.152 21:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:04.152 21:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.410 21:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:04.410 "name": "raid_bdev1", 00:13:04.410 "uuid": "e5359df6-7904-48c7-a843-8f7e7f9a3cda", 00:13:04.410 "strip_size_kb": 64, 00:13:04.410 "state": "online", 00:13:04.410 "raid_level": "concat", 00:13:04.410 "superblock": true, 00:13:04.410 "num_base_bdevs": 2, 00:13:04.410 "num_base_bdevs_discovered": 2, 00:13:04.410 "num_base_bdevs_operational": 2, 00:13:04.410 "base_bdevs_list": [ 00:13:04.410 { 00:13:04.410 "name": "BaseBdev1", 00:13:04.410 "uuid": "24bb84d6-5b33-5e1e-b903-535bab34e5e3", 00:13:04.410 "is_configured": true, 00:13:04.410 "data_offset": 2048, 00:13:04.410 "data_size": 63488 00:13:04.410 
}, 00:13:04.410 { 00:13:04.410 "name": "BaseBdev2", 00:13:04.410 "uuid": "7a4d3eab-ab40-5948-babb-70704fc1e823", 00:13:04.410 "is_configured": true, 00:13:04.410 "data_offset": 2048, 00:13:04.410 "data_size": 63488 00:13:04.410 } 00:13:04.410 ] 00:13:04.410 }' 00:13:04.410 21:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:04.410 21:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.015 21:56:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:05.015 [2024-07-13 21:56:24.332277] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:05.015 [2024-07-13 21:56:24.332316] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:05.015 [2024-07-13 21:56:24.334620] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:05.015 [2024-07-13 21:56:24.334662] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.015 [2024-07-13 21:56:24.334690] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:05.015 [2024-07-13 21:56:24.334705] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state offline 00:13:05.015 0 00:13:05.015 21:56:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1357005 00:13:05.015 21:56:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 1357005 ']' 00:13:05.015 21:56:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 1357005 00:13:05.015 21:56:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:13:05.015 21:56:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:05.015 
21:56:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1357005 00:13:05.015 21:56:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:05.015 21:56:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:05.015 21:56:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1357005' 00:13:05.015 killing process with pid 1357005 00:13:05.015 21:56:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 1357005 00:13:05.015 [2024-07-13 21:56:24.404530] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:05.015 21:56:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 1357005 00:13:05.274 [2024-07-13 21:56:24.481435] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:06.652 21:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:06.652 21:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.GBUoV6qj6Z 00:13:06.652 21:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:06.652 21:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:13:06.652 21:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:13:06.652 21:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:06.652 21:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:06.652 21:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:13:06.652 00:13:06.652 real 0m6.132s 00:13:06.652 user 0m8.478s 00:13:06.652 sys 0m0.955s 00:13:06.652 21:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:06.652 21:56:25 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.652 ************************************ 00:13:06.652 END TEST raid_read_error_test 00:13:06.652 ************************************ 00:13:06.652 21:56:25 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:06.652 21:56:25 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:13:06.652 21:56:25 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:06.652 21:56:25 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:06.652 21:56:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:06.652 ************************************ 00:13:06.652 START TEST raid_write_error_test 00:13:06.652 ************************************ 00:13:06.652 21:56:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 write 00:13:06.652 21:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:13:06.652 21:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:13:06.652 21:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:13:06.652 21:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:06.652 21:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:06.652 21:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:06.652 21:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:06.653 21:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:06.653 21:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:06.653 21:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:06.653 21:56:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:06.653 21:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:06.653 21:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:06.653 21:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:06.653 21:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:06.653 21:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:06.653 21:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:06.653 21:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:06.653 21:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:13:06.653 21:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:06.653 21:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:13:06.653 21:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:06.653 21:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.61V0CXkjSS 00:13:06.653 21:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1358180 00:13:06.653 21:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1358180 /var/tmp/spdk-raid.sock 00:13:06.653 21:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:06.653 21:56:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 1358180 ']' 00:13:06.653 21:56:25 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:06.653 21:56:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:06.653 21:56:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:06.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:06.653 21:56:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:06.653 21:56:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.653 [2024-07-13 21:56:25.941272] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:06.653 [2024-07-13 21:56:25.941370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1358180 ] 00:13:06.653 [2024-07-13 21:56:26.102543] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.172 [2024-07-13 21:56:26.314800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.172 [2024-07-13 21:56:26.553853] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*:
raid_bdev_get_ctx_size 00:13:07.172 [2024-07-13 21:56:26.553885] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:07.431 21:56:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:07.431 21:56:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:13:07.431 21:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:07.431 21:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:07.689 BaseBdev1_malloc 00:13:07.689 21:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:07.689 true 00:13:07.689 21:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:07.947 [2024-07-13 21:56:27.229109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:07.948 [2024-07-13 21:56:27.229156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.948 [2024-07-13 21:56:27.229177] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:13:07.948 [2024-07-13 21:56:27.229194] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.948 [2024-07-13 21:56:27.231277] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.948 [2024-07-13 21:56:27.231309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:07.948 BaseBdev1 00:13:07.948 21:56:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:07.948 21:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:08.206 BaseBdev2_malloc 00:13:08.206 21:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:08.466 true 00:13:08.466 21:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:08.466 [2024-07-13 21:56:27.777824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:08.466 [2024-07-13 21:56:27.777873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.466 [2024-07-13 21:56:27.777894] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:13:08.466 [2024-07-13 21:56:27.777916] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.466 [2024-07-13 21:56:27.779973] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.466 [2024-07-13 21:56:27.780003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:08.466 BaseBdev2 00:13:08.466 21:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:13:08.725 [2024-07-13 21:56:27.934310] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:08.725 [2024-07-13 21:56:27.936099] bdev_raid.c:3198:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:13:08.725 [2024-07-13 21:56:27.936288] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040e80 00:13:08.725 [2024-07-13 21:56:27.936304] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:08.725 [2024-07-13 21:56:27.936557] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:13:08.725 [2024-07-13 21:56:27.936745] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040e80 00:13:08.725 [2024-07-13 21:56:27.936759] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040e80 00:13:08.725 [2024-07-13 21:56:27.936900] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.725 21:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:08.725 21:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:08.725 21:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:08.725 21:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:08.725 21:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:08.725 21:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:08.725 21:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:08.725 21:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:08.725 21:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:08.725 21:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:08.725 21:56:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:08.725 21:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.984 21:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:08.984 "name": "raid_bdev1", 00:13:08.984 "uuid": "9ed1da1a-cd8f-4db9-b5eb-2e5a9439f1b0", 00:13:08.984 "strip_size_kb": 64, 00:13:08.984 "state": "online", 00:13:08.984 "raid_level": "concat", 00:13:08.984 "superblock": true, 00:13:08.984 "num_base_bdevs": 2, 00:13:08.984 "num_base_bdevs_discovered": 2, 00:13:08.984 "num_base_bdevs_operational": 2, 00:13:08.984 "base_bdevs_list": [ 00:13:08.984 { 00:13:08.984 "name": "BaseBdev1", 00:13:08.984 "uuid": "64511e2a-5760-5b9d-98f2-7b8a5416464d", 00:13:08.984 "is_configured": true, 00:13:08.984 "data_offset": 2048, 00:13:08.984 "data_size": 63488 00:13:08.984 }, 00:13:08.984 { 00:13:08.984 "name": "BaseBdev2", 00:13:08.984 "uuid": "ec51a82a-c59f-50dc-8713-d21bd9094a96", 00:13:08.984 "is_configured": true, 00:13:08.984 "data_offset": 2048, 00:13:08.984 "data_size": 63488 00:13:08.984 } 00:13:08.984 ] 00:13:08.984 }' 00:13:08.984 21:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:08.984 21:56:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.243 21:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:09.243 21:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:09.501 [2024-07-13 21:56:28.701676] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:13:10.438 21:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:10.438 21:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:10.438 21:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:13:10.438 21:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:13:10.438 21:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:10.438 21:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:10.438 21:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:10.438 21:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:10.438 21:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:10.438 21:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:10.438 21:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:10.438 21:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:10.438 21:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:10.438 21:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:10.438 21:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:10.438 21:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.697 21:56:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:10.697 "name": "raid_bdev1", 00:13:10.697 "uuid": "9ed1da1a-cd8f-4db9-b5eb-2e5a9439f1b0", 00:13:10.697 "strip_size_kb": 64, 00:13:10.697 "state": "online", 00:13:10.697 "raid_level": "concat", 00:13:10.697 "superblock": true, 00:13:10.697 "num_base_bdevs": 2, 00:13:10.697 "num_base_bdevs_discovered": 2, 00:13:10.697 "num_base_bdevs_operational": 2, 00:13:10.697 "base_bdevs_list": [ 00:13:10.697 { 00:13:10.697 "name": "BaseBdev1", 00:13:10.697 "uuid": "64511e2a-5760-5b9d-98f2-7b8a5416464d", 00:13:10.697 "is_configured": true, 00:13:10.697 "data_offset": 2048, 00:13:10.697 "data_size": 63488 00:13:10.697 }, 00:13:10.697 { 00:13:10.697 "name": "BaseBdev2", 00:13:10.697 "uuid": "ec51a82a-c59f-50dc-8713-d21bd9094a96", 00:13:10.697 "is_configured": true, 00:13:10.697 "data_offset": 2048, 00:13:10.697 "data_size": 63488 00:13:10.697 } 00:13:10.697 ] 00:13:10.697 }' 00:13:10.697 21:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:10.697 21:56:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.266 21:56:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:11.266 [2024-07-13 21:56:30.621598] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:11.266 [2024-07-13 21:56:30.621637] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:11.266 [2024-07-13 21:56:30.624002] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:11.266 [2024-07-13 21:56:30.624058] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.266 [2024-07-13 21:56:30.624086] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:11.266 [2024-07-13 21:56:30.624102] 
bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state offline 00:13:11.266 0 00:13:11.266 21:56:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1358180 00:13:11.266 21:56:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 1358180 ']' 00:13:11.266 21:56:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 1358180 00:13:11.266 21:56:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:13:11.266 21:56:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:11.266 21:56:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1358180 00:13:11.525 21:56:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:11.525 21:56:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:11.525 21:56:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1358180' 00:13:11.525 killing process with pid 1358180 00:13:11.525 21:56:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 1358180 00:13:11.525 [2024-07-13 21:56:30.693516] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:11.525 21:56:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 1358180 00:13:11.525 [2024-07-13 21:56:30.766072] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:12.905 21:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.61V0CXkjSS 00:13:12.905 21:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:12.905 21:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:12.905 21:56:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:13:12.905 21:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:13:12.905 21:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:12.905 21:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:12.905 21:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:13:12.905 00:13:12.905 real 0m6.204s 00:13:12.905 user 0m8.560s 00:13:12.905 sys 0m1.021s 00:13:12.905 21:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:12.905 21:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.905 ************************************ 00:13:12.905 END TEST raid_write_error_test 00:13:12.905 ************************************ 00:13:12.905 21:56:32 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:12.905 21:56:32 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:13:12.905 21:56:32 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:13:12.905 21:56:32 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:12.905 21:56:32 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:12.905 21:56:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:12.905 ************************************ 00:13:12.905 START TEST raid_state_function_test 00:13:12.905 ************************************ 00:13:12.905 21:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 false 00:13:12.905 21:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:13:12.905 21:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:13:12.905 21:56:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:13:12.905 21:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:12.905 21:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:12.905 21:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:12.905 21:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:12.905 21:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:12.905 21:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:12.905 21:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:12.905 21:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:12.905 21:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:12.906 21:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:12.906 21:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:12.906 21:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:12.906 21:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:12.906 21:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:12.906 21:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:12.906 21:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:13:12.906 21:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:13:12.906 21:56:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:13:12.906 21:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:13:12.906 21:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1359393 00:13:12.906 21:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:12.906 21:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1359393' 00:13:12.906 Process raid pid: 1359393 00:13:12.906 21:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1359393 /var/tmp/spdk-raid.sock 00:13:12.906 21:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 1359393 ']' 00:13:12.906 21:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:12.906 21:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:12.906 21:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:12.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:12.906 21:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:12.906 21:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.906 [2024-07-13 21:56:32.200476] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:13:12.906 [2024-07-13 21:56:32.200583] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.165 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:13.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.165 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:13.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.165 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:13.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.165 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:13.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.165 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:13.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.165 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:13.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.165 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:13.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.165 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:13.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.165 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:13.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.165 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:13.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.165 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:13.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.165 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:13.165 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.165 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:13.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.165 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:13.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.165 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:13.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.165 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:13.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.166 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:13.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.166 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:13.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.166 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:13.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.166 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:13.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.166 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:13.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.166 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:13.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.166 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:13.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.166 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:13.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.166 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:13.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.166 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:13.166 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.166 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:13.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.166 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:13.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.166 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:13.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.166 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:13.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.166 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:13.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:13.166 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:13.166 [2024-07-13 21:56:32.363179] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.425 [2024-07-13 21:56:32.568648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.425 [2024-07-13 21:56:32.805531] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:13.425 [2024-07-13 21:56:32.805561] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:13.684 21:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:13.684 21:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:13:13.684 21:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:13.944 [2024-07-13 21:56:33.100526] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:13.944 [2024-07-13 21:56:33.100571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 
00:13:13.944 [2024-07-13 21:56:33.100581] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:13.944 [2024-07-13 21:56:33.100593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:13.944 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:13.944 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:13.944 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:13.944 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:13.944 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:13.944 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:13.944 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:13.944 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:13.944 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:13.944 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:13.944 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:13.944 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.944 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:13.944 "name": "Existed_Raid", 00:13:13.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.944 "strip_size_kb": 0, 
00:13:13.944 "state": "configuring", 00:13:13.944 "raid_level": "raid1", 00:13:13.944 "superblock": false, 00:13:13.944 "num_base_bdevs": 2, 00:13:13.944 "num_base_bdevs_discovered": 0, 00:13:13.944 "num_base_bdevs_operational": 2, 00:13:13.944 "base_bdevs_list": [ 00:13:13.944 { 00:13:13.944 "name": "BaseBdev1", 00:13:13.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.944 "is_configured": false, 00:13:13.944 "data_offset": 0, 00:13:13.944 "data_size": 0 00:13:13.944 }, 00:13:13.944 { 00:13:13.944 "name": "BaseBdev2", 00:13:13.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.944 "is_configured": false, 00:13:13.944 "data_offset": 0, 00:13:13.944 "data_size": 0 00:13:13.944 } 00:13:13.944 ] 00:13:13.944 }' 00:13:13.944 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:13.944 21:56:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.510 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:14.769 [2024-07-13 21:56:33.934640] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:14.769 [2024-07-13 21:56:33.934674] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:13:14.769 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:14.769 [2024-07-13 21:56:34.103085] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:14.769 [2024-07-13 21:56:34.103122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:14.769 [2024-07-13 21:56:34.103131] bdev.c:8157:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:14.769 [2024-07-13 21:56:34.103143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:14.769 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:15.028 [2024-07-13 21:56:34.312179] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:15.028 BaseBdev1 00:13:15.028 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:15.028 21:56:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:15.028 21:56:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:15.028 21:56:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:15.028 21:56:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:15.028 21:56:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:15.028 21:56:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:15.286 21:56:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:15.286 [ 00:13:15.286 { 00:13:15.286 "name": "BaseBdev1", 00:13:15.286 "aliases": [ 00:13:15.286 "e2659336-939a-4f67-b6a9-dd49d206c280" 00:13:15.286 ], 00:13:15.286 "product_name": "Malloc disk", 00:13:15.286 "block_size": 512, 00:13:15.287 "num_blocks": 65536, 00:13:15.287 "uuid": "e2659336-939a-4f67-b6a9-dd49d206c280", 00:13:15.287 
"assigned_rate_limits": { 00:13:15.287 "rw_ios_per_sec": 0, 00:13:15.287 "rw_mbytes_per_sec": 0, 00:13:15.287 "r_mbytes_per_sec": 0, 00:13:15.287 "w_mbytes_per_sec": 0 00:13:15.287 }, 00:13:15.287 "claimed": true, 00:13:15.287 "claim_type": "exclusive_write", 00:13:15.287 "zoned": false, 00:13:15.287 "supported_io_types": { 00:13:15.287 "read": true, 00:13:15.287 "write": true, 00:13:15.287 "unmap": true, 00:13:15.287 "flush": true, 00:13:15.287 "reset": true, 00:13:15.287 "nvme_admin": false, 00:13:15.287 "nvme_io": false, 00:13:15.287 "nvme_io_md": false, 00:13:15.287 "write_zeroes": true, 00:13:15.287 "zcopy": true, 00:13:15.287 "get_zone_info": false, 00:13:15.287 "zone_management": false, 00:13:15.287 "zone_append": false, 00:13:15.287 "compare": false, 00:13:15.287 "compare_and_write": false, 00:13:15.287 "abort": true, 00:13:15.287 "seek_hole": false, 00:13:15.287 "seek_data": false, 00:13:15.287 "copy": true, 00:13:15.287 "nvme_iov_md": false 00:13:15.287 }, 00:13:15.287 "memory_domains": [ 00:13:15.287 { 00:13:15.287 "dma_device_id": "system", 00:13:15.287 "dma_device_type": 1 00:13:15.287 }, 00:13:15.287 { 00:13:15.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.287 "dma_device_type": 2 00:13:15.287 } 00:13:15.287 ], 00:13:15.287 "driver_specific": {} 00:13:15.287 } 00:13:15.287 ] 00:13:15.287 21:56:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:15.287 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:15.287 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:15.287 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:15.287 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:15.287 21:56:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:15.287 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:15.287 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:15.287 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:15.287 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:15.287 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:15.287 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:15.287 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.546 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:15.546 "name": "Existed_Raid", 00:13:15.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.546 "strip_size_kb": 0, 00:13:15.546 "state": "configuring", 00:13:15.546 "raid_level": "raid1", 00:13:15.546 "superblock": false, 00:13:15.546 "num_base_bdevs": 2, 00:13:15.546 "num_base_bdevs_discovered": 1, 00:13:15.546 "num_base_bdevs_operational": 2, 00:13:15.546 "base_bdevs_list": [ 00:13:15.546 { 00:13:15.546 "name": "BaseBdev1", 00:13:15.546 "uuid": "e2659336-939a-4f67-b6a9-dd49d206c280", 00:13:15.546 "is_configured": true, 00:13:15.546 "data_offset": 0, 00:13:15.546 "data_size": 65536 00:13:15.546 }, 00:13:15.546 { 00:13:15.546 "name": "BaseBdev2", 00:13:15.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.546 "is_configured": false, 00:13:15.546 "data_offset": 0, 00:13:15.546 "data_size": 0 00:13:15.546 } 00:13:15.546 ] 00:13:15.546 }' 00:13:15.546 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:13:15.546 21:56:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.114 21:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:16.114 [2024-07-13 21:56:35.475288] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:16.114 [2024-07-13 21:56:35.475335] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:13:16.114 21:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:16.373 [2024-07-13 21:56:35.647808] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:16.373 [2024-07-13 21:56:35.649600] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:16.373 [2024-07-13 21:56:35.649638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:16.373 21:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:16.373 21:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:16.373 21:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:16.373 21:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:16.373 21:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:16.373 21:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:16.373 21:56:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:16.373 21:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:16.373 21:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:16.373 21:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:16.373 21:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:16.373 21:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:16.373 21:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:16.373 21:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.631 21:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:16.631 "name": "Existed_Raid", 00:13:16.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.631 "strip_size_kb": 0, 00:13:16.631 "state": "configuring", 00:13:16.631 "raid_level": "raid1", 00:13:16.631 "superblock": false, 00:13:16.631 "num_base_bdevs": 2, 00:13:16.631 "num_base_bdevs_discovered": 1, 00:13:16.631 "num_base_bdevs_operational": 2, 00:13:16.631 "base_bdevs_list": [ 00:13:16.631 { 00:13:16.631 "name": "BaseBdev1", 00:13:16.631 "uuid": "e2659336-939a-4f67-b6a9-dd49d206c280", 00:13:16.631 "is_configured": true, 00:13:16.631 "data_offset": 0, 00:13:16.631 "data_size": 65536 00:13:16.631 }, 00:13:16.631 { 00:13:16.631 "name": "BaseBdev2", 00:13:16.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.631 "is_configured": false, 00:13:16.631 "data_offset": 0, 00:13:16.631 "data_size": 0 00:13:16.631 } 00:13:16.631 ] 00:13:16.631 }' 00:13:16.631 21:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:13:16.631 21:56:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.198 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:17.198 [2024-07-13 21:56:36.528608] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:17.198 [2024-07-13 21:56:36.528659] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:13:17.198 [2024-07-13 21:56:36.528671] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:17.198 [2024-07-13 21:56:36.528923] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:13:17.198 [2024-07-13 21:56:36.529094] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:13:17.198 [2024-07-13 21:56:36.529107] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:13:17.198 [2024-07-13 21:56:36.529373] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.198 BaseBdev2 00:13:17.198 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:17.198 21:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:17.198 21:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:17.198 21:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:17.198 21:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:17.198 21:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:17.198 21:56:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:17.460 21:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:17.718 [ 00:13:17.718 { 00:13:17.718 "name": "BaseBdev2", 00:13:17.718 "aliases": [ 00:13:17.718 "b0a95c7b-4e78-4689-84bc-10ca73d71da6" 00:13:17.718 ], 00:13:17.718 "product_name": "Malloc disk", 00:13:17.718 "block_size": 512, 00:13:17.718 "num_blocks": 65536, 00:13:17.718 "uuid": "b0a95c7b-4e78-4689-84bc-10ca73d71da6", 00:13:17.718 "assigned_rate_limits": { 00:13:17.718 "rw_ios_per_sec": 0, 00:13:17.718 "rw_mbytes_per_sec": 0, 00:13:17.718 "r_mbytes_per_sec": 0, 00:13:17.718 "w_mbytes_per_sec": 0 00:13:17.718 }, 00:13:17.718 "claimed": true, 00:13:17.718 "claim_type": "exclusive_write", 00:13:17.718 "zoned": false, 00:13:17.718 "supported_io_types": { 00:13:17.718 "read": true, 00:13:17.718 "write": true, 00:13:17.718 "unmap": true, 00:13:17.718 "flush": true, 00:13:17.718 "reset": true, 00:13:17.718 "nvme_admin": false, 00:13:17.718 "nvme_io": false, 00:13:17.718 "nvme_io_md": false, 00:13:17.718 "write_zeroes": true, 00:13:17.718 "zcopy": true, 00:13:17.718 "get_zone_info": false, 00:13:17.718 "zone_management": false, 00:13:17.718 "zone_append": false, 00:13:17.718 "compare": false, 00:13:17.718 "compare_and_write": false, 00:13:17.718 "abort": true, 00:13:17.718 "seek_hole": false, 00:13:17.718 "seek_data": false, 00:13:17.718 "copy": true, 00:13:17.718 "nvme_iov_md": false 00:13:17.718 }, 00:13:17.718 "memory_domains": [ 00:13:17.718 { 00:13:17.718 "dma_device_id": "system", 00:13:17.718 "dma_device_type": 1 00:13:17.718 }, 00:13:17.718 { 00:13:17.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.718 "dma_device_type": 2 00:13:17.718 } 00:13:17.718 ], 00:13:17.718 "driver_specific": {} 
00:13:17.718 } 00:13:17.718 ] 00:13:17.718 21:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:17.718 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:17.718 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:17.718 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:17.718 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:17.718 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:17.718 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:17.718 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:17.718 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:17.718 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:17.718 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:17.718 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:17.718 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:17.718 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:17.718 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.718 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:17.718 "name": "Existed_Raid", 00:13:17.718 "uuid": 
"379afeda-5d3a-48f1-a6f8-f06de26b45be", 00:13:17.718 "strip_size_kb": 0, 00:13:17.718 "state": "online", 00:13:17.718 "raid_level": "raid1", 00:13:17.718 "superblock": false, 00:13:17.718 "num_base_bdevs": 2, 00:13:17.718 "num_base_bdevs_discovered": 2, 00:13:17.718 "num_base_bdevs_operational": 2, 00:13:17.718 "base_bdevs_list": [ 00:13:17.718 { 00:13:17.718 "name": "BaseBdev1", 00:13:17.718 "uuid": "e2659336-939a-4f67-b6a9-dd49d206c280", 00:13:17.718 "is_configured": true, 00:13:17.718 "data_offset": 0, 00:13:17.718 "data_size": 65536 00:13:17.718 }, 00:13:17.718 { 00:13:17.718 "name": "BaseBdev2", 00:13:17.718 "uuid": "b0a95c7b-4e78-4689-84bc-10ca73d71da6", 00:13:17.718 "is_configured": true, 00:13:17.718 "data_offset": 0, 00:13:17.718 "data_size": 65536 00:13:17.718 } 00:13:17.718 ] 00:13:17.718 }' 00:13:17.718 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:17.718 21:56:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.284 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:18.284 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:18.284 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:18.284 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:18.284 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:18.284 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:18.284 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:18.284 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
00:13:18.542 [2024-07-13 21:56:37.695967] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:18.542 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:18.542 "name": "Existed_Raid", 00:13:18.542 "aliases": [ 00:13:18.542 "379afeda-5d3a-48f1-a6f8-f06de26b45be" 00:13:18.542 ], 00:13:18.542 "product_name": "Raid Volume", 00:13:18.542 "block_size": 512, 00:13:18.542 "num_blocks": 65536, 00:13:18.542 "uuid": "379afeda-5d3a-48f1-a6f8-f06de26b45be", 00:13:18.542 "assigned_rate_limits": { 00:13:18.542 "rw_ios_per_sec": 0, 00:13:18.542 "rw_mbytes_per_sec": 0, 00:13:18.542 "r_mbytes_per_sec": 0, 00:13:18.542 "w_mbytes_per_sec": 0 00:13:18.542 }, 00:13:18.542 "claimed": false, 00:13:18.542 "zoned": false, 00:13:18.542 "supported_io_types": { 00:13:18.542 "read": true, 00:13:18.542 "write": true, 00:13:18.542 "unmap": false, 00:13:18.542 "flush": false, 00:13:18.542 "reset": true, 00:13:18.542 "nvme_admin": false, 00:13:18.542 "nvme_io": false, 00:13:18.542 "nvme_io_md": false, 00:13:18.542 "write_zeroes": true, 00:13:18.542 "zcopy": false, 00:13:18.542 "get_zone_info": false, 00:13:18.542 "zone_management": false, 00:13:18.542 "zone_append": false, 00:13:18.542 "compare": false, 00:13:18.542 "compare_and_write": false, 00:13:18.542 "abort": false, 00:13:18.542 "seek_hole": false, 00:13:18.542 "seek_data": false, 00:13:18.543 "copy": false, 00:13:18.543 "nvme_iov_md": false 00:13:18.543 }, 00:13:18.543 "memory_domains": [ 00:13:18.543 { 00:13:18.543 "dma_device_id": "system", 00:13:18.543 "dma_device_type": 1 00:13:18.543 }, 00:13:18.543 { 00:13:18.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.543 "dma_device_type": 2 00:13:18.543 }, 00:13:18.543 { 00:13:18.543 "dma_device_id": "system", 00:13:18.543 "dma_device_type": 1 00:13:18.543 }, 00:13:18.543 { 00:13:18.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.543 "dma_device_type": 2 00:13:18.543 } 00:13:18.543 ], 00:13:18.543 
"driver_specific": { 00:13:18.543 "raid": { 00:13:18.543 "uuid": "379afeda-5d3a-48f1-a6f8-f06de26b45be", 00:13:18.543 "strip_size_kb": 0, 00:13:18.543 "state": "online", 00:13:18.543 "raid_level": "raid1", 00:13:18.543 "superblock": false, 00:13:18.543 "num_base_bdevs": 2, 00:13:18.543 "num_base_bdevs_discovered": 2, 00:13:18.543 "num_base_bdevs_operational": 2, 00:13:18.543 "base_bdevs_list": [ 00:13:18.543 { 00:13:18.543 "name": "BaseBdev1", 00:13:18.543 "uuid": "e2659336-939a-4f67-b6a9-dd49d206c280", 00:13:18.543 "is_configured": true, 00:13:18.543 "data_offset": 0, 00:13:18.543 "data_size": 65536 00:13:18.543 }, 00:13:18.543 { 00:13:18.543 "name": "BaseBdev2", 00:13:18.543 "uuid": "b0a95c7b-4e78-4689-84bc-10ca73d71da6", 00:13:18.543 "is_configured": true, 00:13:18.543 "data_offset": 0, 00:13:18.543 "data_size": 65536 00:13:18.543 } 00:13:18.543 ] 00:13:18.543 } 00:13:18.543 } 00:13:18.543 }' 00:13:18.543 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:18.543 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:18.543 BaseBdev2' 00:13:18.543 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:18.543 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:18.543 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:18.543 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:18.543 "name": "BaseBdev1", 00:13:18.543 "aliases": [ 00:13:18.543 "e2659336-939a-4f67-b6a9-dd49d206c280" 00:13:18.543 ], 00:13:18.543 "product_name": "Malloc disk", 00:13:18.543 "block_size": 512, 00:13:18.543 "num_blocks": 65536, 00:13:18.543 
"uuid": "e2659336-939a-4f67-b6a9-dd49d206c280", 00:13:18.543 "assigned_rate_limits": { 00:13:18.543 "rw_ios_per_sec": 0, 00:13:18.543 "rw_mbytes_per_sec": 0, 00:13:18.543 "r_mbytes_per_sec": 0, 00:13:18.543 "w_mbytes_per_sec": 0 00:13:18.543 }, 00:13:18.543 "claimed": true, 00:13:18.543 "claim_type": "exclusive_write", 00:13:18.543 "zoned": false, 00:13:18.543 "supported_io_types": { 00:13:18.543 "read": true, 00:13:18.543 "write": true, 00:13:18.543 "unmap": true, 00:13:18.543 "flush": true, 00:13:18.543 "reset": true, 00:13:18.543 "nvme_admin": false, 00:13:18.543 "nvme_io": false, 00:13:18.543 "nvme_io_md": false, 00:13:18.543 "write_zeroes": true, 00:13:18.543 "zcopy": true, 00:13:18.543 "get_zone_info": false, 00:13:18.543 "zone_management": false, 00:13:18.543 "zone_append": false, 00:13:18.543 "compare": false, 00:13:18.543 "compare_and_write": false, 00:13:18.543 "abort": true, 00:13:18.543 "seek_hole": false, 00:13:18.543 "seek_data": false, 00:13:18.543 "copy": true, 00:13:18.543 "nvme_iov_md": false 00:13:18.543 }, 00:13:18.543 "memory_domains": [ 00:13:18.543 { 00:13:18.543 "dma_device_id": "system", 00:13:18.543 "dma_device_type": 1 00:13:18.543 }, 00:13:18.543 { 00:13:18.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.543 "dma_device_type": 2 00:13:18.543 } 00:13:18.543 ], 00:13:18.543 "driver_specific": {} 00:13:18.543 }' 00:13:18.800 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:18.800 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:18.800 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:18.800 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:18.800 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:18.800 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:18.800 21:56:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:18.800 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:18.800 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:18.800 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:18.800 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:19.057 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:19.057 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:19.057 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:19.057 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:19.057 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:19.057 "name": "BaseBdev2", 00:13:19.057 "aliases": [ 00:13:19.057 "b0a95c7b-4e78-4689-84bc-10ca73d71da6" 00:13:19.057 ], 00:13:19.057 "product_name": "Malloc disk", 00:13:19.057 "block_size": 512, 00:13:19.057 "num_blocks": 65536, 00:13:19.057 "uuid": "b0a95c7b-4e78-4689-84bc-10ca73d71da6", 00:13:19.057 "assigned_rate_limits": { 00:13:19.057 "rw_ios_per_sec": 0, 00:13:19.057 "rw_mbytes_per_sec": 0, 00:13:19.057 "r_mbytes_per_sec": 0, 00:13:19.057 "w_mbytes_per_sec": 0 00:13:19.057 }, 00:13:19.057 "claimed": true, 00:13:19.057 "claim_type": "exclusive_write", 00:13:19.057 "zoned": false, 00:13:19.057 "supported_io_types": { 00:13:19.057 "read": true, 00:13:19.057 "write": true, 00:13:19.057 "unmap": true, 00:13:19.057 "flush": true, 00:13:19.057 "reset": true, 00:13:19.057 "nvme_admin": false, 00:13:19.057 "nvme_io": false, 00:13:19.057 "nvme_io_md": false, 
00:13:19.057 "write_zeroes": true, 00:13:19.057 "zcopy": true, 00:13:19.057 "get_zone_info": false, 00:13:19.057 "zone_management": false, 00:13:19.057 "zone_append": false, 00:13:19.057 "compare": false, 00:13:19.057 "compare_and_write": false, 00:13:19.057 "abort": true, 00:13:19.057 "seek_hole": false, 00:13:19.058 "seek_data": false, 00:13:19.058 "copy": true, 00:13:19.058 "nvme_iov_md": false 00:13:19.058 }, 00:13:19.058 "memory_domains": [ 00:13:19.058 { 00:13:19.058 "dma_device_id": "system", 00:13:19.058 "dma_device_type": 1 00:13:19.058 }, 00:13:19.058 { 00:13:19.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.058 "dma_device_type": 2 00:13:19.058 } 00:13:19.058 ], 00:13:19.058 "driver_specific": {} 00:13:19.058 }' 00:13:19.058 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:19.058 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:19.315 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:19.315 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:19.315 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:19.315 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:19.315 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:19.315 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:19.315 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:19.315 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:19.315 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:19.315 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:19.315 21:56:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:19.574 [2024-07-13 21:56:38.846833] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:19.574 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:19.574 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:13:19.574 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:19.574 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:13:19.574 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:13:19.574 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:13:19.574 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:19.575 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:19.575 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:19.575 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:19.575 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:13:19.575 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:19.575 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:19.575 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:19.575 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:19.575 21:56:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:19.575 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.834 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:19.834 "name": "Existed_Raid", 00:13:19.834 "uuid": "379afeda-5d3a-48f1-a6f8-f06de26b45be", 00:13:19.834 "strip_size_kb": 0, 00:13:19.834 "state": "online", 00:13:19.834 "raid_level": "raid1", 00:13:19.834 "superblock": false, 00:13:19.834 "num_base_bdevs": 2, 00:13:19.834 "num_base_bdevs_discovered": 1, 00:13:19.834 "num_base_bdevs_operational": 1, 00:13:19.834 "base_bdevs_list": [ 00:13:19.834 { 00:13:19.834 "name": null, 00:13:19.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.834 "is_configured": false, 00:13:19.834 "data_offset": 0, 00:13:19.834 "data_size": 65536 00:13:19.834 }, 00:13:19.834 { 00:13:19.834 "name": "BaseBdev2", 00:13:19.834 "uuid": "b0a95c7b-4e78-4689-84bc-10ca73d71da6", 00:13:19.834 "is_configured": true, 00:13:19.834 "data_offset": 0, 00:13:19.834 "data_size": 65536 00:13:19.834 } 00:13:19.834 ] 00:13:19.834 }' 00:13:19.834 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:19.834 21:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.403 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:20.403 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:20.403 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:20.403 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:13:20.403 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:20.403 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:20.403 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:20.662 [2024-07-13 21:56:39.888798] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:20.662 [2024-07-13 21:56:39.888889] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:20.662 [2024-07-13 21:56:39.980277] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:20.662 [2024-07-13 21:56:39.980328] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:20.662 [2024-07-13 21:56:39.980342] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:13:20.662 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:20.662 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:20.662 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:20.662 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:20.921 21:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:20.921 21:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:20.921 21:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:13:20.921 21:56:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1359393 00:13:20.921 21:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 1359393 ']' 00:13:20.921 21:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 1359393 00:13:20.921 21:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:13:20.921 21:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:20.921 21:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1359393 00:13:20.921 21:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:20.921 21:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:20.921 21:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1359393' 00:13:20.921 killing process with pid 1359393 00:13:20.921 21:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 1359393 00:13:20.921 [2024-07-13 21:56:40.221569] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:20.921 21:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 1359393 00:13:20.921 [2024-07-13 21:56:40.239267] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:13:22.301 00:13:22.301 real 0m9.309s 00:13:22.301 user 0m15.341s 00:13:22.301 sys 0m1.711s 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.301 ************************************ 00:13:22.301 END TEST raid_state_function_test 
00:13:22.301 ************************************ 00:13:22.301 21:56:41 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:22.301 21:56:41 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:13:22.301 21:56:41 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:22.301 21:56:41 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:22.301 21:56:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:22.301 ************************************ 00:13:22.301 START TEST raid_state_function_test_sb 00:13:22.301 ************************************ 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 
00:13:22.301 21:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1361227 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1361227' 00:13:22.301 Process raid pid: 1361227 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1361227 /var/tmp/spdk-raid.sock 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1361227 ']' 00:13:22.301 21:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:22.302 21:56:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:13:22.302 21:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:22.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:22.302 21:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.302 21:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.302 21:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:22.302 [2024-07-13 21:56:41.608268] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:22.302 [2024-07-13 21:56:41.608376] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested 
device 0000:3d:01.5 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3f:01.3 
cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:22.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.562 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:22.562 [2024-07-13 21:56:41.771888] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.822 [2024-07-13 21:56:41.977591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.082 [2024-07-13 21:56:42.224396] bdev_raid.c:1416:raid_bdev_get_ctx_size: 
*DEBUG*: raid_bdev_get_ctx_size 00:13:23.082 [2024-07-13 21:56:42.224422] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:23.082 21:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.082 21:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:13:23.082 21:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:23.342 [2024-07-13 21:56:42.521213] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:23.342 [2024-07-13 21:56:42.521257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:23.342 [2024-07-13 21:56:42.521267] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:23.342 [2024-07-13 21:56:42.521279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:23.342 21:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:23.342 21:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:23.342 21:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:23.342 21:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:23.342 21:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:23.342 21:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:23.342 21:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
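`verify_raid_bdev_state Existed_Raid configuring raid1 0 2`, traced above, fetches `bdev_raid_get_bdevs all` over the RPC socket and filters it with `jq -r '.[] | select(.name == "Existed_Raid")'`. The sketch below imitates that check against a canned copy of the captured JSON instead of a live `/var/tmp/spdk-raid.sock` target; `get_field` is a hypothetical grep/cut helper standing in for `jq`, and handles only string fields:

```shell
# Canned excerpt of the raid_bdev_info JSON captured in the trace above.
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid1",
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 2
}'

# Pull one string-valued field out of the JSON; illustration only
# (the real test drives jq against the RPC output).
get_field() {
    grep -o "\"$1\": \"[^\"]*\"" <<< "$raid_bdev_info" | head -n1 | cut -d'"' -f4
}

state=$(get_field state)
raid_level=$(get_field raid_level)
[ "$state" = configuring ] && [ "$raid_level" = raid1 ] && echo "Existed_Raid state OK"
```

The traced run expects `state == configuring` with zero of two base bdevs discovered, since neither BaseBdev1 nor BaseBdev2 exists yet at this point.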
00:13:23.342 21:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:23.342 21:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:23.342 21:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:23.342 21:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:23.342 21:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.342 21:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:23.342 "name": "Existed_Raid", 00:13:23.342 "uuid": "828b84d1-ba67-4f5f-8989-04c31d418781", 00:13:23.342 "strip_size_kb": 0, 00:13:23.342 "state": "configuring", 00:13:23.342 "raid_level": "raid1", 00:13:23.342 "superblock": true, 00:13:23.342 "num_base_bdevs": 2, 00:13:23.342 "num_base_bdevs_discovered": 0, 00:13:23.342 "num_base_bdevs_operational": 2, 00:13:23.342 "base_bdevs_list": [ 00:13:23.342 { 00:13:23.342 "name": "BaseBdev1", 00:13:23.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.342 "is_configured": false, 00:13:23.342 "data_offset": 0, 00:13:23.342 "data_size": 0 00:13:23.342 }, 00:13:23.342 { 00:13:23.342 "name": "BaseBdev2", 00:13:23.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.342 "is_configured": false, 00:13:23.342 "data_offset": 0, 00:13:23.342 "data_size": 0 00:13:23.342 } 00:13:23.342 ] 00:13:23.342 }' 00:13:23.342 21:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:23.342 21:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.911 21:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:24.169 [2024-07-13 21:56:43.323217] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:24.169 [2024-07-13 21:56:43.323251] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:13:24.169 21:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:24.169 [2024-07-13 21:56:43.491697] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:24.169 [2024-07-13 21:56:43.491736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:24.169 [2024-07-13 21:56:43.491746] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:24.169 [2024-07-13 21:56:43.491775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:24.169 21:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:24.427 [2024-07-13 21:56:43.698043] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:24.427 BaseBdev1 00:13:24.427 21:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:24.427 21:56:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:24.427 21:56:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:24.427 21:56:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 
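The `waitforbdev BaseBdev1` call traced above (with its `bdev_timeout=2000` default) is a poll-until-ready pattern. Below is a self-contained, hedged sketch of that pattern; `wait_for` and `check_ready` are hypothetical names, and a temp-file probe stands in for the real `rpc.py bdev_get_bdevs -b BaseBdev1 -t 2000` call against the SPDK socket:

```shell
# Retry a probe command until it succeeds or the retry budget is spent.
wait_for() {
    local max_retries=$1 i
    shift
    for ((i = 0; i < max_retries; i++)); do
        "$@" && return 0      # probe succeeded
        sleep 0.1
    done
    return 1                  # gave up
}

marker=$(mktemp)
check_ready() { [ -s "$marker" ]; }   # stand-in for "does the bdev exist yet?"

echo ready > "$marker"                # simulate the bdev appearing
result=fail
wait_for 5 check_ready && result=ok
rm -f "$marker"
echo "$result"
```

The real helper behaves the same way, except that the probe is an RPC lookup and a failed budget aborts the test case.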
00:13:24.427 21:56:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:24.427 21:56:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:24.427 21:56:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:24.685 21:56:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:24.685 [ 00:13:24.685 { 00:13:24.685 "name": "BaseBdev1", 00:13:24.685 "aliases": [ 00:13:24.685 "c0da31e2-f555-416d-8895-5183094a48a0" 00:13:24.685 ], 00:13:24.685 "product_name": "Malloc disk", 00:13:24.685 "block_size": 512, 00:13:24.685 "num_blocks": 65536, 00:13:24.685 "uuid": "c0da31e2-f555-416d-8895-5183094a48a0", 00:13:24.685 "assigned_rate_limits": { 00:13:24.685 "rw_ios_per_sec": 0, 00:13:24.685 "rw_mbytes_per_sec": 0, 00:13:24.685 "r_mbytes_per_sec": 0, 00:13:24.685 "w_mbytes_per_sec": 0 00:13:24.685 }, 00:13:24.685 "claimed": true, 00:13:24.685 "claim_type": "exclusive_write", 00:13:24.685 "zoned": false, 00:13:24.685 "supported_io_types": { 00:13:24.685 "read": true, 00:13:24.685 "write": true, 00:13:24.685 "unmap": true, 00:13:24.685 "flush": true, 00:13:24.685 "reset": true, 00:13:24.685 "nvme_admin": false, 00:13:24.685 "nvme_io": false, 00:13:24.685 "nvme_io_md": false, 00:13:24.685 "write_zeroes": true, 00:13:24.685 "zcopy": true, 00:13:24.685 "get_zone_info": false, 00:13:24.685 "zone_management": false, 00:13:24.685 "zone_append": false, 00:13:24.685 "compare": false, 00:13:24.685 "compare_and_write": false, 00:13:24.685 "abort": true, 00:13:24.685 "seek_hole": false, 00:13:24.685 "seek_data": false, 00:13:24.685 "copy": true, 00:13:24.685 "nvme_iov_md": false 00:13:24.685 }, 00:13:24.685 
"memory_domains": [ 00:13:24.685 { 00:13:24.685 "dma_device_id": "system", 00:13:24.685 "dma_device_type": 1 00:13:24.685 }, 00:13:24.685 { 00:13:24.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.685 "dma_device_type": 2 00:13:24.685 } 00:13:24.685 ], 00:13:24.685 "driver_specific": {} 00:13:24.685 } 00:13:24.685 ] 00:13:24.685 21:56:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:24.685 21:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:24.685 21:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:24.685 21:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:24.685 21:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:24.685 21:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:24.685 21:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:24.685 21:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:24.685 21:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:24.685 21:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:24.685 21:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:24.685 21:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.685 21:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.945 21:56:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:24.945 "name": "Existed_Raid", 00:13:24.945 "uuid": "217de8e1-e1dd-4b39-80fc-4092db6d22ec", 00:13:24.945 "strip_size_kb": 0, 00:13:24.945 "state": "configuring", 00:13:24.945 "raid_level": "raid1", 00:13:24.945 "superblock": true, 00:13:24.945 "num_base_bdevs": 2, 00:13:24.945 "num_base_bdevs_discovered": 1, 00:13:24.945 "num_base_bdevs_operational": 2, 00:13:24.945 "base_bdevs_list": [ 00:13:24.945 { 00:13:24.945 "name": "BaseBdev1", 00:13:24.945 "uuid": "c0da31e2-f555-416d-8895-5183094a48a0", 00:13:24.945 "is_configured": true, 00:13:24.945 "data_offset": 2048, 00:13:24.945 "data_size": 63488 00:13:24.945 }, 00:13:24.945 { 00:13:24.945 "name": "BaseBdev2", 00:13:24.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.945 "is_configured": false, 00:13:24.945 "data_offset": 0, 00:13:24.945 "data_size": 0 00:13:24.945 } 00:13:24.945 ] 00:13:24.945 }' 00:13:24.945 21:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:24.945 21:56:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.511 21:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:25.511 [2024-07-13 21:56:44.853131] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:25.511 [2024-07-13 21:56:44.853180] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:13:25.512 21:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:25.770 [2024-07-13 21:56:45.009597] bdev_raid.c:3198:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:13:25.770 [2024-07-13 21:56:45.011312] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:25.770 [2024-07-13 21:56:45.011349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:25.770 21:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:25.770 21:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:25.770 21:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:25.770 21:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:25.770 21:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:25.770 21:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:25.770 21:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:25.770 21:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:25.770 21:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:25.770 21:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:25.770 21:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:25.770 21:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:25.770 21:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:25.770 21:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:13:26.029 21:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:26.029 "name": "Existed_Raid", 00:13:26.029 "uuid": "f84a4f78-17ab-4129-b485-307d7403807d", 00:13:26.029 "strip_size_kb": 0, 00:13:26.029 "state": "configuring", 00:13:26.029 "raid_level": "raid1", 00:13:26.029 "superblock": true, 00:13:26.029 "num_base_bdevs": 2, 00:13:26.029 "num_base_bdevs_discovered": 1, 00:13:26.029 "num_base_bdevs_operational": 2, 00:13:26.029 "base_bdevs_list": [ 00:13:26.029 { 00:13:26.029 "name": "BaseBdev1", 00:13:26.029 "uuid": "c0da31e2-f555-416d-8895-5183094a48a0", 00:13:26.029 "is_configured": true, 00:13:26.029 "data_offset": 2048, 00:13:26.029 "data_size": 63488 00:13:26.029 }, 00:13:26.029 { 00:13:26.029 "name": "BaseBdev2", 00:13:26.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.029 "is_configured": false, 00:13:26.029 "data_offset": 0, 00:13:26.029 "data_size": 0 00:13:26.029 } 00:13:26.029 ] 00:13:26.029 }' 00:13:26.029 21:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:26.029 21:56:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.597 21:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:26.597 [2024-07-13 21:56:45.869508] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:26.597 [2024-07-13 21:56:45.869738] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:13:26.597 [2024-07-13 21:56:45.869758] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:26.597 [2024-07-13 21:56:45.870014] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:13:26.597 [2024-07-13 21:56:45.870189] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:13:26.597 [2024-07-13 21:56:45.870203] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:13:26.597 [2024-07-13 21:56:45.870342] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.597 BaseBdev2 00:13:26.597 21:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:26.597 21:56:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:26.597 21:56:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:26.597 21:56:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:26.597 21:56:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:26.597 21:56:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:26.597 21:56:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:26.857 21:56:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:26.857 [ 00:13:26.857 { 00:13:26.857 "name": "BaseBdev2", 00:13:26.857 "aliases": [ 00:13:26.857 "9220204a-4d4d-4a12-8a15-4a198869016d" 00:13:26.857 ], 00:13:26.857 "product_name": "Malloc disk", 00:13:26.857 "block_size": 512, 00:13:26.857 "num_blocks": 65536, 00:13:26.857 "uuid": "9220204a-4d4d-4a12-8a15-4a198869016d", 00:13:26.857 "assigned_rate_limits": { 00:13:26.857 "rw_ios_per_sec": 0, 00:13:26.857 "rw_mbytes_per_sec": 0, 00:13:26.857 "r_mbytes_per_sec": 0, 00:13:26.857 
"w_mbytes_per_sec": 0 00:13:26.857 }, 00:13:26.857 "claimed": true, 00:13:26.857 "claim_type": "exclusive_write", 00:13:26.857 "zoned": false, 00:13:26.857 "supported_io_types": { 00:13:26.857 "read": true, 00:13:26.857 "write": true, 00:13:26.857 "unmap": true, 00:13:26.857 "flush": true, 00:13:26.857 "reset": true, 00:13:26.857 "nvme_admin": false, 00:13:26.857 "nvme_io": false, 00:13:26.857 "nvme_io_md": false, 00:13:26.857 "write_zeroes": true, 00:13:26.857 "zcopy": true, 00:13:26.857 "get_zone_info": false, 00:13:26.857 "zone_management": false, 00:13:26.857 "zone_append": false, 00:13:26.857 "compare": false, 00:13:26.857 "compare_and_write": false, 00:13:26.857 "abort": true, 00:13:26.857 "seek_hole": false, 00:13:26.857 "seek_data": false, 00:13:26.857 "copy": true, 00:13:26.857 "nvme_iov_md": false 00:13:26.857 }, 00:13:26.857 "memory_domains": [ 00:13:26.857 { 00:13:26.857 "dma_device_id": "system", 00:13:26.857 "dma_device_type": 1 00:13:26.857 }, 00:13:26.857 { 00:13:26.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.857 "dma_device_type": 2 00:13:26.857 } 00:13:26.857 ], 00:13:26.857 "driver_specific": {} 00:13:26.857 } 00:13:26.857 ] 00:13:26.857 21:56:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:26.857 21:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:26.857 21:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:26.857 21:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:26.857 21:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:26.857 21:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:26.857 21:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:13:26.857 21:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:26.857 21:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:26.857 21:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:26.857 21:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:26.857 21:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:26.857 21:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:27.117 21:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:27.117 21:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.117 21:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:27.117 "name": "Existed_Raid", 00:13:27.117 "uuid": "f84a4f78-17ab-4129-b485-307d7403807d", 00:13:27.117 "strip_size_kb": 0, 00:13:27.117 "state": "online", 00:13:27.117 "raid_level": "raid1", 00:13:27.117 "superblock": true, 00:13:27.117 "num_base_bdevs": 2, 00:13:27.117 "num_base_bdevs_discovered": 2, 00:13:27.117 "num_base_bdevs_operational": 2, 00:13:27.117 "base_bdevs_list": [ 00:13:27.117 { 00:13:27.117 "name": "BaseBdev1", 00:13:27.117 "uuid": "c0da31e2-f555-416d-8895-5183094a48a0", 00:13:27.117 "is_configured": true, 00:13:27.117 "data_offset": 2048, 00:13:27.117 "data_size": 63488 00:13:27.117 }, 00:13:27.117 { 00:13:27.117 "name": "BaseBdev2", 00:13:27.117 "uuid": "9220204a-4d4d-4a12-8a15-4a198869016d", 00:13:27.117 "is_configured": true, 00:13:27.117 "data_offset": 2048, 00:13:27.117 "data_size": 63488 00:13:27.117 } 00:13:27.117 ] 00:13:27.117 
}' 00:13:27.117 21:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:27.117 21:56:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.686 21:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:27.686 21:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:27.686 21:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:27.686 21:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:27.686 21:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:27.686 21:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:27.686 21:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:27.686 21:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:27.686 [2024-07-13 21:56:47.056931] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:27.946 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:27.946 "name": "Existed_Raid", 00:13:27.946 "aliases": [ 00:13:27.946 "f84a4f78-17ab-4129-b485-307d7403807d" 00:13:27.946 ], 00:13:27.946 "product_name": "Raid Volume", 00:13:27.946 "block_size": 512, 00:13:27.946 "num_blocks": 63488, 00:13:27.946 "uuid": "f84a4f78-17ab-4129-b485-307d7403807d", 00:13:27.946 "assigned_rate_limits": { 00:13:27.946 "rw_ios_per_sec": 0, 00:13:27.946 "rw_mbytes_per_sec": 0, 00:13:27.946 "r_mbytes_per_sec": 0, 00:13:27.946 "w_mbytes_per_sec": 0 00:13:27.946 }, 00:13:27.946 "claimed": false, 00:13:27.946 "zoned": false, 
00:13:27.946 "supported_io_types": { 00:13:27.946 "read": true, 00:13:27.946 "write": true, 00:13:27.946 "unmap": false, 00:13:27.946 "flush": false, 00:13:27.946 "reset": true, 00:13:27.946 "nvme_admin": false, 00:13:27.946 "nvme_io": false, 00:13:27.946 "nvme_io_md": false, 00:13:27.946 "write_zeroes": true, 00:13:27.946 "zcopy": false, 00:13:27.946 "get_zone_info": false, 00:13:27.946 "zone_management": false, 00:13:27.946 "zone_append": false, 00:13:27.946 "compare": false, 00:13:27.946 "compare_and_write": false, 00:13:27.946 "abort": false, 00:13:27.946 "seek_hole": false, 00:13:27.946 "seek_data": false, 00:13:27.946 "copy": false, 00:13:27.946 "nvme_iov_md": false 00:13:27.946 }, 00:13:27.946 "memory_domains": [ 00:13:27.946 { 00:13:27.946 "dma_device_id": "system", 00:13:27.946 "dma_device_type": 1 00:13:27.946 }, 00:13:27.946 { 00:13:27.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.946 "dma_device_type": 2 00:13:27.946 }, 00:13:27.946 { 00:13:27.946 "dma_device_id": "system", 00:13:27.946 "dma_device_type": 1 00:13:27.946 }, 00:13:27.946 { 00:13:27.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.946 "dma_device_type": 2 00:13:27.946 } 00:13:27.946 ], 00:13:27.946 "driver_specific": { 00:13:27.946 "raid": { 00:13:27.946 "uuid": "f84a4f78-17ab-4129-b485-307d7403807d", 00:13:27.946 "strip_size_kb": 0, 00:13:27.946 "state": "online", 00:13:27.946 "raid_level": "raid1", 00:13:27.946 "superblock": true, 00:13:27.946 "num_base_bdevs": 2, 00:13:27.946 "num_base_bdevs_discovered": 2, 00:13:27.946 "num_base_bdevs_operational": 2, 00:13:27.946 "base_bdevs_list": [ 00:13:27.946 { 00:13:27.946 "name": "BaseBdev1", 00:13:27.946 "uuid": "c0da31e2-f555-416d-8895-5183094a48a0", 00:13:27.946 "is_configured": true, 00:13:27.946 "data_offset": 2048, 00:13:27.946 "data_size": 63488 00:13:27.946 }, 00:13:27.946 { 00:13:27.946 "name": "BaseBdev2", 00:13:27.946 "uuid": "9220204a-4d4d-4a12-8a15-4a198869016d", 00:13:27.946 "is_configured": true, 00:13:27.946 
"data_offset": 2048, 00:13:27.946 "data_size": 63488 00:13:27.946 } 00:13:27.946 ] 00:13:27.946 } 00:13:27.946 } 00:13:27.946 }' 00:13:27.946 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:27.946 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:27.946 BaseBdev2' 00:13:27.946 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:27.946 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:27.946 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:27.946 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:27.946 "name": "BaseBdev1", 00:13:27.946 "aliases": [ 00:13:27.946 "c0da31e2-f555-416d-8895-5183094a48a0" 00:13:27.946 ], 00:13:27.946 "product_name": "Malloc disk", 00:13:27.946 "block_size": 512, 00:13:27.946 "num_blocks": 65536, 00:13:27.946 "uuid": "c0da31e2-f555-416d-8895-5183094a48a0", 00:13:27.946 "assigned_rate_limits": { 00:13:27.946 "rw_ios_per_sec": 0, 00:13:27.946 "rw_mbytes_per_sec": 0, 00:13:27.946 "r_mbytes_per_sec": 0, 00:13:27.946 "w_mbytes_per_sec": 0 00:13:27.946 }, 00:13:27.946 "claimed": true, 00:13:27.946 "claim_type": "exclusive_write", 00:13:27.946 "zoned": false, 00:13:27.946 "supported_io_types": { 00:13:27.946 "read": true, 00:13:27.946 "write": true, 00:13:27.946 "unmap": true, 00:13:27.946 "flush": true, 00:13:27.946 "reset": true, 00:13:27.946 "nvme_admin": false, 00:13:27.946 "nvme_io": false, 00:13:27.946 "nvme_io_md": false, 00:13:27.946 "write_zeroes": true, 00:13:27.946 "zcopy": true, 00:13:27.946 "get_zone_info": false, 00:13:27.946 "zone_management": false, 
00:13:27.946 "zone_append": false, 00:13:27.946 "compare": false, 00:13:27.946 "compare_and_write": false, 00:13:27.946 "abort": true, 00:13:27.946 "seek_hole": false, 00:13:27.946 "seek_data": false, 00:13:27.946 "copy": true, 00:13:27.946 "nvme_iov_md": false 00:13:27.946 }, 00:13:27.946 "memory_domains": [ 00:13:27.946 { 00:13:27.946 "dma_device_id": "system", 00:13:27.946 "dma_device_type": 1 00:13:27.946 }, 00:13:27.946 { 00:13:27.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.946 "dma_device_type": 2 00:13:27.946 } 00:13:27.946 ], 00:13:27.946 "driver_specific": {} 00:13:27.946 }' 00:13:27.946 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:28.206 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:28.206 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:28.206 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:28.206 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:28.206 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:28.206 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:28.206 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:28.206 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:28.206 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:28.206 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:28.465 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:28.465 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 
00:13:28.465 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:28.465 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:28.465 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:28.465 "name": "BaseBdev2", 00:13:28.465 "aliases": [ 00:13:28.465 "9220204a-4d4d-4a12-8a15-4a198869016d" 00:13:28.465 ], 00:13:28.465 "product_name": "Malloc disk", 00:13:28.465 "block_size": 512, 00:13:28.465 "num_blocks": 65536, 00:13:28.465 "uuid": "9220204a-4d4d-4a12-8a15-4a198869016d", 00:13:28.465 "assigned_rate_limits": { 00:13:28.465 "rw_ios_per_sec": 0, 00:13:28.465 "rw_mbytes_per_sec": 0, 00:13:28.465 "r_mbytes_per_sec": 0, 00:13:28.465 "w_mbytes_per_sec": 0 00:13:28.465 }, 00:13:28.466 "claimed": true, 00:13:28.466 "claim_type": "exclusive_write", 00:13:28.466 "zoned": false, 00:13:28.466 "supported_io_types": { 00:13:28.466 "read": true, 00:13:28.466 "write": true, 00:13:28.466 "unmap": true, 00:13:28.466 "flush": true, 00:13:28.466 "reset": true, 00:13:28.466 "nvme_admin": false, 00:13:28.466 "nvme_io": false, 00:13:28.466 "nvme_io_md": false, 00:13:28.466 "write_zeroes": true, 00:13:28.466 "zcopy": true, 00:13:28.466 "get_zone_info": false, 00:13:28.466 "zone_management": false, 00:13:28.466 "zone_append": false, 00:13:28.466 "compare": false, 00:13:28.466 "compare_and_write": false, 00:13:28.466 "abort": true, 00:13:28.466 "seek_hole": false, 00:13:28.466 "seek_data": false, 00:13:28.466 "copy": true, 00:13:28.466 "nvme_iov_md": false 00:13:28.466 }, 00:13:28.466 "memory_domains": [ 00:13:28.466 { 00:13:28.466 "dma_device_id": "system", 00:13:28.466 "dma_device_type": 1 00:13:28.466 }, 00:13:28.466 { 00:13:28.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.466 "dma_device_type": 2 00:13:28.466 } 00:13:28.466 ], 00:13:28.466 
"driver_specific": {} 00:13:28.466 }' 00:13:28.466 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:28.466 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:28.725 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:28.725 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:28.725 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:28.725 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:28.725 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:28.725 21:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:28.725 21:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:28.725 21:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:28.725 21:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:28.985 21:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:28.985 21:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:28.985 [2024-07-13 21:56:48.267957] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:28.985 21:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:28.985 21:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:13:28.985 21:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:28.985 21:56:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:13:28.985 21:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:13:28.985 21:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:13:28.985 21:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:28.985 21:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:28.985 21:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:28.985 21:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:28.985 21:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:13:28.985 21:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:28.985 21:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:28.985 21:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:28.985 21:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:28.985 21:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.985 21:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.244 21:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:29.244 "name": "Existed_Raid", 00:13:29.244 "uuid": "f84a4f78-17ab-4129-b485-307d7403807d", 00:13:29.244 "strip_size_kb": 0, 00:13:29.244 "state": "online", 00:13:29.244 
"raid_level": "raid1", 00:13:29.244 "superblock": true, 00:13:29.244 "num_base_bdevs": 2, 00:13:29.244 "num_base_bdevs_discovered": 1, 00:13:29.244 "num_base_bdevs_operational": 1, 00:13:29.244 "base_bdevs_list": [ 00:13:29.244 { 00:13:29.244 "name": null, 00:13:29.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.244 "is_configured": false, 00:13:29.244 "data_offset": 2048, 00:13:29.244 "data_size": 63488 00:13:29.244 }, 00:13:29.244 { 00:13:29.244 "name": "BaseBdev2", 00:13:29.244 "uuid": "9220204a-4d4d-4a12-8a15-4a198869016d", 00:13:29.244 "is_configured": true, 00:13:29.244 "data_offset": 2048, 00:13:29.244 "data_size": 63488 00:13:29.244 } 00:13:29.244 ] 00:13:29.244 }' 00:13:29.244 21:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:29.244 21:56:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.567 21:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:29.567 21:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:29.567 21:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:29.567 21:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:29.826 21:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:29.826 21:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:29.826 21:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:30.086 [2024-07-13 21:56:49.261158] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:13:30.086 [2024-07-13 21:56:49.261257] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:30.086 [2024-07-13 21:56:49.354961] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:30.086 [2024-07-13 21:56:49.355010] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:30.086 [2024-07-13 21:56:49.355024] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:13:30.086 21:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:30.086 21:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:30.086 21:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:30.086 21:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:30.345 21:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:30.345 21:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:30.345 21:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:13:30.345 21:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1361227 00:13:30.345 21:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1361227 ']' 00:13:30.345 21:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 1361227 00:13:30.345 21:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:13:30.345 21:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:30.345 
21:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1361227 00:13:30.345 21:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:30.345 21:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:30.345 21:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1361227' 00:13:30.345 killing process with pid 1361227 00:13:30.345 21:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 1361227 00:13:30.345 [2024-07-13 21:56:49.585881] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:30.345 21:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 1361227 00:13:30.345 [2024-07-13 21:56:49.603586] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:31.723 21:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:13:31.723 00:13:31.723 real 0m9.293s 00:13:31.723 user 0m15.278s 00:13:31.723 sys 0m1.748s 00:13:31.723 21:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:31.723 21:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.723 ************************************ 00:13:31.723 END TEST raid_state_function_test_sb 00:13:31.723 ************************************ 00:13:31.723 21:56:50 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:31.723 21:56:50 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:13:31.723 21:56:50 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:31.723 21:56:50 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:31.723 21:56:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:13:31.723 ************************************ 00:13:31.723 START TEST raid_superblock_test 00:13:31.723 ************************************ 00:13:31.723 21:56:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:13:31.723 21:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:13:31.723 21:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:13:31.723 21:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:13:31.723 21:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:13:31.723 21:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:13:31.723 21:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:13:31.723 21:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:13:31.723 21:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:13:31.723 21:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:13:31.723 21:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:13:31.723 21:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:13:31.723 21:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:13:31.723 21:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:13:31.723 21:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:13:31.723 21:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:13:31.723 21:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1363030 00:13:31.723 21:56:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@412 -- # waitforlisten 1363030 /var/tmp/spdk-raid.sock 00:13:31.723 21:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:31.723 21:56:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 1363030 ']' 00:13:31.723 21:56:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:31.723 21:56:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:31.723 21:56:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:31.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:31.723 21:56:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:31.723 21:56:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.723 [2024-07-13 21:56:50.982271] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:13:31.723 [2024-07-13 21:56:50.982386] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1363030 ] 00:13:31.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.723 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:31.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.723 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:31.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.723 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:31.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.723 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:31.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.723 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:31.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.723 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:31.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.723 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:31.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.723 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:31.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.723 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:31.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.723 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:31.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.723 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:31.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.723 EAL: Requested device 0000:3d:02.3 cannot be used 
00:13:31.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.723 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:31.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.723 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:31.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.723 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:31.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.723 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:31.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.723 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:31.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.723 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:31.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.723 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:31.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.723 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:31.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.723 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:31.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.723 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:31.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.723 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:31.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.723 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:31.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.723 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:31.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.723 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:31.723 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.723 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:31.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.723 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:31.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.724 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:31.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.724 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:31.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.724 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:31.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.724 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:31.983 [2024-07-13 21:56:51.145668] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.983 [2024-07-13 21:56:51.357408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.241 [2024-07-13 21:56:51.600698] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.241 [2024-07-13 21:56:51.600727] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.500 21:56:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:32.500 21:56:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:13:32.500 21:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:13:32.500 21:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:32.500 21:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:13:32.500 21:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:13:32.500 21:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:32.500 21:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:32.500 21:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:32.500 21:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:32.500 21:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:32.759 malloc1 00:13:32.759 21:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:32.759 [2024-07-13 21:56:52.119144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:32.759 [2024-07-13 21:56:52.119196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.759 [2024-07-13 21:56:52.119235] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:13:32.759 [2024-07-13 21:56:52.119247] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.759 [2024-07-13 21:56:52.121342] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.759 [2024-07-13 21:56:52.121369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:32.759 pt1 00:13:32.759 21:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:32.760 21:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:32.760 21:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:13:32.760 21:56:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:13:32.760 21:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:32.760 21:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:32.760 21:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:32.760 21:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:32.760 21:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:33.019 malloc2 00:13:33.019 21:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:33.278 [2024-07-13 21:56:52.503445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:33.278 [2024-07-13 21:56:52.503492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.278 [2024-07-13 21:56:52.503530] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:13:33.278 [2024-07-13 21:56:52.503541] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.278 [2024-07-13 21:56:52.505618] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.278 [2024-07-13 21:56:52.505654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:33.278 pt2 00:13:33.278 21:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:33.278 21:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:33.278 21:56:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:13:33.538 [2024-07-13 21:56:52.675945] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:33.538 [2024-07-13 21:56:52.677718] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:33.538 [2024-07-13 21:56:52.677887] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040880 00:13:33.538 [2024-07-13 21:56:52.677911] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:33.538 [2024-07-13 21:56:52.678175] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:13:33.538 [2024-07-13 21:56:52.678367] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040880 00:13:33.538 [2024-07-13 21:56:52.678380] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040880 00:13:33.538 [2024-07-13 21:56:52.678533] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.538 21:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:33.538 21:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:33.538 21:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:33.538 21:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:33.538 21:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:33.538 21:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:33.538 21:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:13:33.538 21:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:33.538 21:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:33.538 21:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:33.538 21:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:33.538 21:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.538 21:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:33.538 "name": "raid_bdev1", 00:13:33.538 "uuid": "30f853a8-a7cb-4ae8-8094-a1178f85ab05", 00:13:33.538 "strip_size_kb": 0, 00:13:33.538 "state": "online", 00:13:33.538 "raid_level": "raid1", 00:13:33.538 "superblock": true, 00:13:33.538 "num_base_bdevs": 2, 00:13:33.538 "num_base_bdevs_discovered": 2, 00:13:33.538 "num_base_bdevs_operational": 2, 00:13:33.538 "base_bdevs_list": [ 00:13:33.538 { 00:13:33.538 "name": "pt1", 00:13:33.538 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:33.538 "is_configured": true, 00:13:33.538 "data_offset": 2048, 00:13:33.538 "data_size": 63488 00:13:33.538 }, 00:13:33.538 { 00:13:33.538 "name": "pt2", 00:13:33.538 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:33.538 "is_configured": true, 00:13:33.538 "data_offset": 2048, 00:13:33.538 "data_size": 63488 00:13:33.538 } 00:13:33.538 ] 00:13:33.538 }' 00:13:33.538 21:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:33.538 21:56:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.106 21:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:13:34.106 21:56:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:34.106 21:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:34.106 21:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:34.106 21:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:34.106 21:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:34.106 21:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:34.106 21:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:34.364 [2024-07-13 21:56:53.506302] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:34.364 21:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:34.364 "name": "raid_bdev1", 00:13:34.364 "aliases": [ 00:13:34.364 "30f853a8-a7cb-4ae8-8094-a1178f85ab05" 00:13:34.364 ], 00:13:34.364 "product_name": "Raid Volume", 00:13:34.364 "block_size": 512, 00:13:34.364 "num_blocks": 63488, 00:13:34.364 "uuid": "30f853a8-a7cb-4ae8-8094-a1178f85ab05", 00:13:34.364 "assigned_rate_limits": { 00:13:34.364 "rw_ios_per_sec": 0, 00:13:34.364 "rw_mbytes_per_sec": 0, 00:13:34.364 "r_mbytes_per_sec": 0, 00:13:34.364 "w_mbytes_per_sec": 0 00:13:34.364 }, 00:13:34.364 "claimed": false, 00:13:34.364 "zoned": false, 00:13:34.364 "supported_io_types": { 00:13:34.364 "read": true, 00:13:34.364 "write": true, 00:13:34.364 "unmap": false, 00:13:34.364 "flush": false, 00:13:34.364 "reset": true, 00:13:34.364 "nvme_admin": false, 00:13:34.364 "nvme_io": false, 00:13:34.364 "nvme_io_md": false, 00:13:34.364 "write_zeroes": true, 00:13:34.364 "zcopy": false, 00:13:34.364 "get_zone_info": false, 00:13:34.364 "zone_management": false, 00:13:34.364 "zone_append": false, 
00:13:34.364 "compare": false, 00:13:34.364 "compare_and_write": false, 00:13:34.364 "abort": false, 00:13:34.364 "seek_hole": false, 00:13:34.364 "seek_data": false, 00:13:34.364 "copy": false, 00:13:34.364 "nvme_iov_md": false 00:13:34.364 }, 00:13:34.364 "memory_domains": [ 00:13:34.364 { 00:13:34.364 "dma_device_id": "system", 00:13:34.364 "dma_device_type": 1 00:13:34.364 }, 00:13:34.364 { 00:13:34.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.364 "dma_device_type": 2 00:13:34.364 }, 00:13:34.364 { 00:13:34.364 "dma_device_id": "system", 00:13:34.364 "dma_device_type": 1 00:13:34.364 }, 00:13:34.364 { 00:13:34.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.364 "dma_device_type": 2 00:13:34.364 } 00:13:34.364 ], 00:13:34.364 "driver_specific": { 00:13:34.364 "raid": { 00:13:34.364 "uuid": "30f853a8-a7cb-4ae8-8094-a1178f85ab05", 00:13:34.364 "strip_size_kb": 0, 00:13:34.364 "state": "online", 00:13:34.364 "raid_level": "raid1", 00:13:34.364 "superblock": true, 00:13:34.364 "num_base_bdevs": 2, 00:13:34.364 "num_base_bdevs_discovered": 2, 00:13:34.364 "num_base_bdevs_operational": 2, 00:13:34.364 "base_bdevs_list": [ 00:13:34.364 { 00:13:34.364 "name": "pt1", 00:13:34.364 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:34.364 "is_configured": true, 00:13:34.364 "data_offset": 2048, 00:13:34.364 "data_size": 63488 00:13:34.364 }, 00:13:34.364 { 00:13:34.364 "name": "pt2", 00:13:34.364 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:34.364 "is_configured": true, 00:13:34.364 "data_offset": 2048, 00:13:34.364 "data_size": 63488 00:13:34.364 } 00:13:34.364 ] 00:13:34.364 } 00:13:34.364 } 00:13:34.364 }' 00:13:34.364 21:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:34.364 21:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:34.364 pt2' 00:13:34.364 21:56:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:34.365 21:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:34.365 21:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:34.365 21:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:34.365 "name": "pt1", 00:13:34.365 "aliases": [ 00:13:34.365 "00000000-0000-0000-0000-000000000001" 00:13:34.365 ], 00:13:34.365 "product_name": "passthru", 00:13:34.365 "block_size": 512, 00:13:34.365 "num_blocks": 65536, 00:13:34.365 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:34.365 "assigned_rate_limits": { 00:13:34.365 "rw_ios_per_sec": 0, 00:13:34.365 "rw_mbytes_per_sec": 0, 00:13:34.365 "r_mbytes_per_sec": 0, 00:13:34.365 "w_mbytes_per_sec": 0 00:13:34.365 }, 00:13:34.365 "claimed": true, 00:13:34.365 "claim_type": "exclusive_write", 00:13:34.365 "zoned": false, 00:13:34.365 "supported_io_types": { 00:13:34.365 "read": true, 00:13:34.365 "write": true, 00:13:34.365 "unmap": true, 00:13:34.365 "flush": true, 00:13:34.365 "reset": true, 00:13:34.365 "nvme_admin": false, 00:13:34.365 "nvme_io": false, 00:13:34.365 "nvme_io_md": false, 00:13:34.365 "write_zeroes": true, 00:13:34.365 "zcopy": true, 00:13:34.365 "get_zone_info": false, 00:13:34.365 "zone_management": false, 00:13:34.365 "zone_append": false, 00:13:34.365 "compare": false, 00:13:34.365 "compare_and_write": false, 00:13:34.365 "abort": true, 00:13:34.365 "seek_hole": false, 00:13:34.365 "seek_data": false, 00:13:34.365 "copy": true, 00:13:34.365 "nvme_iov_md": false 00:13:34.365 }, 00:13:34.365 "memory_domains": [ 00:13:34.365 { 00:13:34.365 "dma_device_id": "system", 00:13:34.365 "dma_device_type": 1 00:13:34.365 }, 00:13:34.365 { 00:13:34.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.365 
"dma_device_type": 2 00:13:34.365 } 00:13:34.365 ], 00:13:34.365 "driver_specific": { 00:13:34.365 "passthru": { 00:13:34.365 "name": "pt1", 00:13:34.365 "base_bdev_name": "malloc1" 00:13:34.365 } 00:13:34.365 } 00:13:34.365 }' 00:13:34.365 21:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:34.623 21:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:34.623 21:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:34.623 21:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:34.623 21:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:34.623 21:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:34.623 21:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:34.623 21:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:34.623 21:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:34.623 21:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:34.881 21:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:34.881 21:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:34.881 21:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:34.881 21:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:34.881 21:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:34.881 21:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:34.881 "name": "pt2", 00:13:34.881 "aliases": [ 00:13:34.881 
"00000000-0000-0000-0000-000000000002" 00:13:34.881 ], 00:13:34.881 "product_name": "passthru", 00:13:34.881 "block_size": 512, 00:13:34.881 "num_blocks": 65536, 00:13:34.881 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:34.881 "assigned_rate_limits": { 00:13:34.881 "rw_ios_per_sec": 0, 00:13:34.881 "rw_mbytes_per_sec": 0, 00:13:34.881 "r_mbytes_per_sec": 0, 00:13:34.881 "w_mbytes_per_sec": 0 00:13:34.881 }, 00:13:34.881 "claimed": true, 00:13:34.881 "claim_type": "exclusive_write", 00:13:34.881 "zoned": false, 00:13:34.881 "supported_io_types": { 00:13:34.881 "read": true, 00:13:34.881 "write": true, 00:13:34.881 "unmap": true, 00:13:34.881 "flush": true, 00:13:34.881 "reset": true, 00:13:34.881 "nvme_admin": false, 00:13:34.881 "nvme_io": false, 00:13:34.881 "nvme_io_md": false, 00:13:34.881 "write_zeroes": true, 00:13:34.881 "zcopy": true, 00:13:34.881 "get_zone_info": false, 00:13:34.881 "zone_management": false, 00:13:34.881 "zone_append": false, 00:13:34.881 "compare": false, 00:13:34.881 "compare_and_write": false, 00:13:34.881 "abort": true, 00:13:34.881 "seek_hole": false, 00:13:34.881 "seek_data": false, 00:13:34.881 "copy": true, 00:13:34.881 "nvme_iov_md": false 00:13:34.881 }, 00:13:34.881 "memory_domains": [ 00:13:34.881 { 00:13:34.881 "dma_device_id": "system", 00:13:34.881 "dma_device_type": 1 00:13:34.881 }, 00:13:34.881 { 00:13:34.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.881 "dma_device_type": 2 00:13:34.881 } 00:13:34.881 ], 00:13:34.881 "driver_specific": { 00:13:34.881 "passthru": { 00:13:34.881 "name": "pt2", 00:13:34.881 "base_bdev_name": "malloc2" 00:13:34.881 } 00:13:34.881 } 00:13:34.881 }' 00:13:34.881 21:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:34.881 21:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:35.140 21:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:35.140 21:56:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:35.140 21:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:35.140 21:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:35.140 21:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:35.140 21:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:35.140 21:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:35.140 21:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:35.140 21:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:35.399 21:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:35.399 21:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:35.399 21:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:13:35.399 [2024-07-13 21:56:54.689431] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:35.399 21:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=30f853a8-a7cb-4ae8-8094-a1178f85ab05 00:13:35.399 21:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 30f853a8-a7cb-4ae8-8094-a1178f85ab05 ']' 00:13:35.399 21:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:35.658 [2024-07-13 21:56:54.861621] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:35.658 [2024-07-13 21:56:54.861649] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from 
online to offline 00:13:35.658 [2024-07-13 21:56:54.861718] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:35.658 [2024-07-13 21:56:54.861774] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:35.658 [2024-07-13 21:56:54.861792] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040880 name raid_bdev1, state offline 00:13:35.658 21:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:35.658 21:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:13:35.918 21:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:13:35.918 21:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:13:35.918 21:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:35.918 21:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:35.918 21:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:35.918 21:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:36.177 21:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:36.177 21:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:36.435 21:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:13:36.435 
21:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:13:36.435 21:56:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:13:36.435 21:56:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:13:36.435 21:56:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:36.435 21:56:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:36.435 21:56:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:36.435 21:56:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:36.435 21:56:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:36.435 21:56:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:36.435 21:56:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:36.435 21:56:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:13:36.435 21:56:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 
00:13:36.435 [2024-07-13 21:56:55.763992] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:36.435 [2024-07-13 21:56:55.765795] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:36.435 [2024-07-13 21:56:55.765855] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:36.435 [2024-07-13 21:56:55.765900] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:36.435 [2024-07-13 21:56:55.765928] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:36.435 [2024-07-13 21:56:55.765941] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state configuring 00:13:36.435 request: 00:13:36.435 { 00:13:36.435 "name": "raid_bdev1", 00:13:36.435 "raid_level": "raid1", 00:13:36.435 "base_bdevs": [ 00:13:36.435 "malloc1", 00:13:36.435 "malloc2" 00:13:36.435 ], 00:13:36.435 "superblock": false, 00:13:36.435 "method": "bdev_raid_create", 00:13:36.435 "req_id": 1 00:13:36.435 } 00:13:36.435 Got JSON-RPC error response 00:13:36.435 response: 00:13:36.435 { 00:13:36.435 "code": -17, 00:13:36.435 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:36.435 } 00:13:36.435 21:56:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:13:36.435 21:56:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:36.435 21:56:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:36.435 21:56:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:36.435 21:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.435 21:56:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:13:36.692 21:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:13:36.692 21:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:13:36.692 21:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:36.951 [2024-07-13 21:56:56.108831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:36.951 [2024-07-13 21:56:56.108887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.951 [2024-07-13 21:56:56.108915] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:13:36.951 [2024-07-13 21:56:56.108929] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.951 [2024-07-13 21:56:56.111163] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.951 [2024-07-13 21:56:56.111195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:36.951 [2024-07-13 21:56:56.111270] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:36.951 [2024-07-13 21:56:56.111334] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:36.951 pt1 00:13:36.951 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:36.951 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:36.951 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:36.951 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:36.951 21:56:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:36.951 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:36.951 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:36.951 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:36.951 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:36.951 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:36.951 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.951 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.951 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:36.951 "name": "raid_bdev1", 00:13:36.951 "uuid": "30f853a8-a7cb-4ae8-8094-a1178f85ab05", 00:13:36.951 "strip_size_kb": 0, 00:13:36.951 "state": "configuring", 00:13:36.951 "raid_level": "raid1", 00:13:36.951 "superblock": true, 00:13:36.951 "num_base_bdevs": 2, 00:13:36.951 "num_base_bdevs_discovered": 1, 00:13:36.951 "num_base_bdevs_operational": 2, 00:13:36.951 "base_bdevs_list": [ 00:13:36.951 { 00:13:36.951 "name": "pt1", 00:13:36.951 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:36.951 "is_configured": true, 00:13:36.951 "data_offset": 2048, 00:13:36.951 "data_size": 63488 00:13:36.951 }, 00:13:36.951 { 00:13:36.951 "name": null, 00:13:36.951 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:36.951 "is_configured": false, 00:13:36.951 "data_offset": 2048, 00:13:36.951 "data_size": 63488 00:13:36.951 } 00:13:36.951 ] 00:13:36.951 }' 00:13:36.951 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:13:36.951 21:56:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.518 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:13:37.518 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:13:37.518 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:37.518 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:37.777 [2024-07-13 21:56:56.939025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:37.777 [2024-07-13 21:56:56.939083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.777 [2024-07-13 21:56:56.939103] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041d80 00:13:37.777 [2024-07-13 21:56:56.939116] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.777 [2024-07-13 21:56:56.939567] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.777 [2024-07-13 21:56:56.939587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:37.777 [2024-07-13 21:56:56.939665] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:37.777 [2024-07-13 21:56:56.939693] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:37.777 [2024-07-13 21:56:56.939831] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:13:37.777 [2024-07-13 21:56:56.939844] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:37.777 [2024-07-13 21:56:56.940078] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000010640 00:13:37.777 [2024-07-13 21:56:56.940238] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:13:37.777 [2024-07-13 21:56:56.940249] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:13:37.777 [2024-07-13 21:56:56.940385] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.777 pt2 00:13:37.777 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:37.777 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:37.777 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:37.777 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:37.777 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:37.777 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:37.777 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:37.777 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:37.777 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:37.777 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:37.777 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:37.777 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:37.777 21:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:37.777 21:56:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.777 21:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:37.777 "name": "raid_bdev1", 00:13:37.777 "uuid": "30f853a8-a7cb-4ae8-8094-a1178f85ab05", 00:13:37.777 "strip_size_kb": 0, 00:13:37.777 "state": "online", 00:13:37.777 "raid_level": "raid1", 00:13:37.777 "superblock": true, 00:13:37.777 "num_base_bdevs": 2, 00:13:37.777 "num_base_bdevs_discovered": 2, 00:13:37.777 "num_base_bdevs_operational": 2, 00:13:37.777 "base_bdevs_list": [ 00:13:37.777 { 00:13:37.777 "name": "pt1", 00:13:37.777 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:37.777 "is_configured": true, 00:13:37.777 "data_offset": 2048, 00:13:37.777 "data_size": 63488 00:13:37.777 }, 00:13:37.777 { 00:13:37.777 "name": "pt2", 00:13:37.777 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:37.777 "is_configured": true, 00:13:37.777 "data_offset": 2048, 00:13:37.777 "data_size": 63488 00:13:37.777 } 00:13:37.777 ] 00:13:37.777 }' 00:13:37.778 21:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:37.778 21:56:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.345 21:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:13:38.345 21:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:38.345 21:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:38.345 21:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:38.345 21:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:38.345 21:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:38.345 21:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:38.345 21:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:38.604 [2024-07-13 21:56:57.769449] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:38.604 21:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:38.604 "name": "raid_bdev1", 00:13:38.604 "aliases": [ 00:13:38.604 "30f853a8-a7cb-4ae8-8094-a1178f85ab05" 00:13:38.604 ], 00:13:38.604 "product_name": "Raid Volume", 00:13:38.604 "block_size": 512, 00:13:38.604 "num_blocks": 63488, 00:13:38.604 "uuid": "30f853a8-a7cb-4ae8-8094-a1178f85ab05", 00:13:38.604 "assigned_rate_limits": { 00:13:38.604 "rw_ios_per_sec": 0, 00:13:38.604 "rw_mbytes_per_sec": 0, 00:13:38.604 "r_mbytes_per_sec": 0, 00:13:38.604 "w_mbytes_per_sec": 0 00:13:38.604 }, 00:13:38.604 "claimed": false, 00:13:38.604 "zoned": false, 00:13:38.604 "supported_io_types": { 00:13:38.604 "read": true, 00:13:38.604 "write": true, 00:13:38.604 "unmap": false, 00:13:38.604 "flush": false, 00:13:38.604 "reset": true, 00:13:38.604 "nvme_admin": false, 00:13:38.604 "nvme_io": false, 00:13:38.604 "nvme_io_md": false, 00:13:38.604 "write_zeroes": true, 00:13:38.604 "zcopy": false, 00:13:38.604 "get_zone_info": false, 00:13:38.604 "zone_management": false, 00:13:38.604 "zone_append": false, 00:13:38.604 "compare": false, 00:13:38.604 "compare_and_write": false, 00:13:38.604 "abort": false, 00:13:38.604 "seek_hole": false, 00:13:38.604 "seek_data": false, 00:13:38.604 "copy": false, 00:13:38.604 "nvme_iov_md": false 00:13:38.604 }, 00:13:38.604 "memory_domains": [ 00:13:38.604 { 00:13:38.604 "dma_device_id": "system", 00:13:38.604 "dma_device_type": 1 00:13:38.604 }, 00:13:38.604 { 00:13:38.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.604 "dma_device_type": 2 00:13:38.604 }, 00:13:38.604 { 00:13:38.604 "dma_device_id": "system", 
00:13:38.604 "dma_device_type": 1 00:13:38.604 }, 00:13:38.604 { 00:13:38.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.604 "dma_device_type": 2 00:13:38.604 } 00:13:38.604 ], 00:13:38.604 "driver_specific": { 00:13:38.604 "raid": { 00:13:38.604 "uuid": "30f853a8-a7cb-4ae8-8094-a1178f85ab05", 00:13:38.604 "strip_size_kb": 0, 00:13:38.604 "state": "online", 00:13:38.604 "raid_level": "raid1", 00:13:38.604 "superblock": true, 00:13:38.604 "num_base_bdevs": 2, 00:13:38.604 "num_base_bdevs_discovered": 2, 00:13:38.604 "num_base_bdevs_operational": 2, 00:13:38.604 "base_bdevs_list": [ 00:13:38.604 { 00:13:38.604 "name": "pt1", 00:13:38.604 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:38.604 "is_configured": true, 00:13:38.604 "data_offset": 2048, 00:13:38.604 "data_size": 63488 00:13:38.604 }, 00:13:38.604 { 00:13:38.604 "name": "pt2", 00:13:38.604 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:38.604 "is_configured": true, 00:13:38.604 "data_offset": 2048, 00:13:38.604 "data_size": 63488 00:13:38.604 } 00:13:38.604 ] 00:13:38.604 } 00:13:38.604 } 00:13:38.604 }' 00:13:38.604 21:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:38.604 21:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:38.604 pt2' 00:13:38.604 21:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:38.604 21:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:38.604 21:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:38.862 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:38.862 "name": "pt1", 00:13:38.862 "aliases": [ 00:13:38.862 "00000000-0000-0000-0000-000000000001" 
00:13:38.862 ], 00:13:38.862 "product_name": "passthru", 00:13:38.862 "block_size": 512, 00:13:38.862 "num_blocks": 65536, 00:13:38.862 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:38.862 "assigned_rate_limits": { 00:13:38.862 "rw_ios_per_sec": 0, 00:13:38.862 "rw_mbytes_per_sec": 0, 00:13:38.862 "r_mbytes_per_sec": 0, 00:13:38.862 "w_mbytes_per_sec": 0 00:13:38.862 }, 00:13:38.862 "claimed": true, 00:13:38.862 "claim_type": "exclusive_write", 00:13:38.862 "zoned": false, 00:13:38.862 "supported_io_types": { 00:13:38.862 "read": true, 00:13:38.862 "write": true, 00:13:38.862 "unmap": true, 00:13:38.862 "flush": true, 00:13:38.862 "reset": true, 00:13:38.862 "nvme_admin": false, 00:13:38.862 "nvme_io": false, 00:13:38.862 "nvme_io_md": false, 00:13:38.862 "write_zeroes": true, 00:13:38.862 "zcopy": true, 00:13:38.862 "get_zone_info": false, 00:13:38.862 "zone_management": false, 00:13:38.862 "zone_append": false, 00:13:38.862 "compare": false, 00:13:38.862 "compare_and_write": false, 00:13:38.862 "abort": true, 00:13:38.862 "seek_hole": false, 00:13:38.862 "seek_data": false, 00:13:38.862 "copy": true, 00:13:38.862 "nvme_iov_md": false 00:13:38.862 }, 00:13:38.862 "memory_domains": [ 00:13:38.862 { 00:13:38.862 "dma_device_id": "system", 00:13:38.862 "dma_device_type": 1 00:13:38.862 }, 00:13:38.862 { 00:13:38.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.862 "dma_device_type": 2 00:13:38.862 } 00:13:38.862 ], 00:13:38.862 "driver_specific": { 00:13:38.862 "passthru": { 00:13:38.862 "name": "pt1", 00:13:38.862 "base_bdev_name": "malloc1" 00:13:38.862 } 00:13:38.862 } 00:13:38.862 }' 00:13:38.863 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:38.863 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:38.863 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:38.863 21:56:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:38.863 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:38.863 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:38.863 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:38.863 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:38.863 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:38.863 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:39.121 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:39.121 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:39.121 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:39.121 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:39.121 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:39.121 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:39.121 "name": "pt2", 00:13:39.121 "aliases": [ 00:13:39.121 "00000000-0000-0000-0000-000000000002" 00:13:39.121 ], 00:13:39.121 "product_name": "passthru", 00:13:39.121 "block_size": 512, 00:13:39.121 "num_blocks": 65536, 00:13:39.121 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:39.121 "assigned_rate_limits": { 00:13:39.121 "rw_ios_per_sec": 0, 00:13:39.121 "rw_mbytes_per_sec": 0, 00:13:39.121 "r_mbytes_per_sec": 0, 00:13:39.121 "w_mbytes_per_sec": 0 00:13:39.121 }, 00:13:39.121 "claimed": true, 00:13:39.121 "claim_type": "exclusive_write", 00:13:39.121 "zoned": false, 00:13:39.121 "supported_io_types": { 00:13:39.121 "read": true, 
00:13:39.121 "write": true, 00:13:39.121 "unmap": true, 00:13:39.121 "flush": true, 00:13:39.121 "reset": true, 00:13:39.121 "nvme_admin": false, 00:13:39.121 "nvme_io": false, 00:13:39.121 "nvme_io_md": false, 00:13:39.121 "write_zeroes": true, 00:13:39.121 "zcopy": true, 00:13:39.121 "get_zone_info": false, 00:13:39.122 "zone_management": false, 00:13:39.122 "zone_append": false, 00:13:39.122 "compare": false, 00:13:39.122 "compare_and_write": false, 00:13:39.122 "abort": true, 00:13:39.122 "seek_hole": false, 00:13:39.122 "seek_data": false, 00:13:39.122 "copy": true, 00:13:39.122 "nvme_iov_md": false 00:13:39.122 }, 00:13:39.122 "memory_domains": [ 00:13:39.122 { 00:13:39.122 "dma_device_id": "system", 00:13:39.122 "dma_device_type": 1 00:13:39.122 }, 00:13:39.122 { 00:13:39.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.122 "dma_device_type": 2 00:13:39.122 } 00:13:39.122 ], 00:13:39.122 "driver_specific": { 00:13:39.122 "passthru": { 00:13:39.122 "name": "pt2", 00:13:39.122 "base_bdev_name": "malloc2" 00:13:39.122 } 00:13:39.122 } 00:13:39.122 }' 00:13:39.122 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:39.122 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:39.380 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:39.380 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:39.380 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:39.380 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:39.380 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:39.380 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:39.380 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:39.380 21:56:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:39.380 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:39.380 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:39.380 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:39.380 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:13:39.639 [2024-07-13 21:56:58.924455] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.639 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 30f853a8-a7cb-4ae8-8094-a1178f85ab05 '!=' 30f853a8-a7cb-4ae8-8094-a1178f85ab05 ']' 00:13:39.639 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:13:39.639 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:39.639 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:13:39.639 21:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:39.897 [2024-07-13 21:56:59.100738] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:39.897 21:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:39.897 21:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:39.897 21:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:39.897 21:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:39.897 21:56:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:39.897 21:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:13:39.897 21:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:39.897 21:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:39.897 21:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:39.897 21:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:39.897 21:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:39.897 21:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.155 21:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:40.155 "name": "raid_bdev1", 00:13:40.155 "uuid": "30f853a8-a7cb-4ae8-8094-a1178f85ab05", 00:13:40.155 "strip_size_kb": 0, 00:13:40.155 "state": "online", 00:13:40.155 "raid_level": "raid1", 00:13:40.155 "superblock": true, 00:13:40.155 "num_base_bdevs": 2, 00:13:40.155 "num_base_bdevs_discovered": 1, 00:13:40.155 "num_base_bdevs_operational": 1, 00:13:40.155 "base_bdevs_list": [ 00:13:40.155 { 00:13:40.155 "name": null, 00:13:40.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.155 "is_configured": false, 00:13:40.155 "data_offset": 2048, 00:13:40.155 "data_size": 63488 00:13:40.155 }, 00:13:40.155 { 00:13:40.155 "name": "pt2", 00:13:40.155 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:40.155 "is_configured": true, 00:13:40.155 "data_offset": 2048, 00:13:40.155 "data_size": 63488 00:13:40.155 } 00:13:40.155 ] 00:13:40.155 }' 00:13:40.155 21:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:40.155 21:56:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.412 21:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:40.671 [2024-07-13 21:56:59.922872] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:40.671 [2024-07-13 21:56:59.922900] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:40.671 [2024-07-13 21:56:59.922971] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:40.671 [2024-07-13 21:56:59.923013] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:40.671 [2024-07-13 21:56:59.923026] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:13:40.671 21:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:40.671 21:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:13:40.929 21:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:13:40.929 21:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:13:40.929 21:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:13:40.929 21:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:13:40.929 21:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:40.929 21:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:13:40.929 21:57:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:13:40.929 21:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:13:40.929 21:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:13:40.929 21:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:13:40.929 21:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:41.188 [2024-07-13 21:57:00.424185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:41.188 [2024-07-13 21:57:00.424267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.188 [2024-07-13 21:57:00.424289] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042080 00:13:41.188 [2024-07-13 21:57:00.424303] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.188 [2024-07-13 21:57:00.426424] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.188 [2024-07-13 21:57:00.426456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:41.188 [2024-07-13 21:57:00.426531] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:41.188 [2024-07-13 21:57:00.426579] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:41.188 [2024-07-13 21:57:00.426701] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042680 00:13:41.188 [2024-07-13 21:57:00.426714] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:41.188 [2024-07-13 21:57:00.426963] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:13:41.188 [2024-07-13 21:57:00.427140] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042680 00:13:41.188 [2024-07-13 21:57:00.427151] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042680 00:13:41.188 [2024-07-13 21:57:00.427287] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.188 pt2 00:13:41.188 21:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:41.188 21:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:41.188 21:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:41.188 21:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:41.188 21:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:41.188 21:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:13:41.188 21:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:41.188 21:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:41.188 21:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:41.188 21:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:41.188 21:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:41.188 21:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.450 21:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:41.450 "name": "raid_bdev1", 00:13:41.450 "uuid": "30f853a8-a7cb-4ae8-8094-a1178f85ab05", 
00:13:41.450 "strip_size_kb": 0, 00:13:41.450 "state": "online", 00:13:41.450 "raid_level": "raid1", 00:13:41.450 "superblock": true, 00:13:41.450 "num_base_bdevs": 2, 00:13:41.450 "num_base_bdevs_discovered": 1, 00:13:41.450 "num_base_bdevs_operational": 1, 00:13:41.450 "base_bdevs_list": [ 00:13:41.450 { 00:13:41.450 "name": null, 00:13:41.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.450 "is_configured": false, 00:13:41.450 "data_offset": 2048, 00:13:41.450 "data_size": 63488 00:13:41.450 }, 00:13:41.450 { 00:13:41.450 "name": "pt2", 00:13:41.450 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:41.450 "is_configured": true, 00:13:41.450 "data_offset": 2048, 00:13:41.450 "data_size": 63488 00:13:41.450 } 00:13:41.450 ] 00:13:41.450 }' 00:13:41.450 21:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:41.450 21:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.710 21:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:41.969 [2024-07-13 21:57:01.242326] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:41.969 [2024-07-13 21:57:01.242357] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:41.969 [2024-07-13 21:57:01.242422] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:41.969 [2024-07-13 21:57:01.242473] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:41.969 [2024-07-13 21:57:01.242484] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state offline 00:13:41.969 21:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:13:41.969 21:57:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:42.228 21:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:13:42.228 21:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:13:42.228 21:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:13:42.228 21:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:42.228 [2024-07-13 21:57:01.579219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:42.228 [2024-07-13 21:57:01.579276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.228 [2024-07-13 21:57:01.579297] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980 00:13:42.228 [2024-07-13 21:57:01.579309] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.228 [2024-07-13 21:57:01.581286] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.228 [2024-07-13 21:57:01.581315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:42.228 [2024-07-13 21:57:01.581397] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:42.228 [2024-07-13 21:57:01.581458] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:42.228 [2024-07-13 21:57:01.581600] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:42.228 [2024-07-13 21:57:01.581613] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:42.228 [2024-07-13 21:57:01.581631] 
bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042f80 name raid_bdev1, state configuring 00:13:42.228 [2024-07-13 21:57:01.581682] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:42.228 [2024-07-13 21:57:01.581748] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:13:42.228 [2024-07-13 21:57:01.581759] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:42.228 [2024-07-13 21:57:01.581976] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:13:42.228 [2024-07-13 21:57:01.582143] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:13:42.228 [2024-07-13 21:57:01.582156] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:13:42.229 [2024-07-13 21:57:01.582280] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.229 pt1 00:13:42.229 21:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:13:42.229 21:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:42.229 21:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:42.229 21:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:42.229 21:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:42.229 21:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:42.229 21:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:13:42.229 21:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:42.229 21:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:13:42.229 21:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:42.229 21:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:42.229 21:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:42.229 21:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.488 21:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:42.488 "name": "raid_bdev1", 00:13:42.488 "uuid": "30f853a8-a7cb-4ae8-8094-a1178f85ab05", 00:13:42.488 "strip_size_kb": 0, 00:13:42.488 "state": "online", 00:13:42.488 "raid_level": "raid1", 00:13:42.488 "superblock": true, 00:13:42.488 "num_base_bdevs": 2, 00:13:42.488 "num_base_bdevs_discovered": 1, 00:13:42.488 "num_base_bdevs_operational": 1, 00:13:42.488 "base_bdevs_list": [ 00:13:42.488 { 00:13:42.488 "name": null, 00:13:42.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.488 "is_configured": false, 00:13:42.488 "data_offset": 2048, 00:13:42.488 "data_size": 63488 00:13:42.488 }, 00:13:42.488 { 00:13:42.488 "name": "pt2", 00:13:42.488 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:42.488 "is_configured": true, 00:13:42.488 "data_offset": 2048, 00:13:42.488 "data_size": 63488 00:13:42.488 } 00:13:42.488 ] 00:13:42.488 }' 00:13:42.488 21:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:42.488 21:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.053 21:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:43.053 21:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq 
-r '.[].base_bdevs_list[0].is_configured' 00:13:43.312 21:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:13:43.312 21:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:43.312 21:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:13:43.312 [2024-07-13 21:57:02.602141] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:43.312 21:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 30f853a8-a7cb-4ae8-8094-a1178f85ab05 '!=' 30f853a8-a7cb-4ae8-8094-a1178f85ab05 ']' 00:13:43.312 21:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1363030 00:13:43.312 21:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 1363030 ']' 00:13:43.312 21:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 1363030 00:13:43.312 21:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:13:43.312 21:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:43.312 21:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1363030 00:13:43.312 21:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:43.312 21:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:43.312 21:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1363030' 00:13:43.312 killing process with pid 1363030 00:13:43.312 21:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 1363030 00:13:43.312 [2024-07-13 21:57:02.677867] bdev_raid.c:1358:raid_bdev_fini_start: 
*DEBUG*: raid_bdev_fini_start 00:13:43.312 [2024-07-13 21:57:02.677953] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.312 [2024-07-13 21:57:02.678000] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.312 [2024-07-13 21:57:02.678014] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:13:43.312 21:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 1363030 00:13:43.571 [2024-07-13 21:57:02.818811] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:44.948 21:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:13:44.948 00:13:44.948 real 0m13.150s 00:13:44.948 user 0m22.535s 00:13:44.948 sys 0m2.476s 00:13:44.948 21:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:44.948 21:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.948 ************************************ 00:13:44.948 END TEST raid_superblock_test 00:13:44.948 ************************************ 00:13:44.948 21:57:04 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:44.948 21:57:04 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:13:44.948 21:57:04 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:44.948 21:57:04 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:44.948 21:57:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:44.948 ************************************ 00:13:44.948 START TEST raid_read_error_test 00:13:44.948 ************************************ 00:13:44.948 21:57:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 read 00:13:44.948 21:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local 
raid_level=raid1 00:13:44.948 21:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:13:44.948 21:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:13:44.948 21:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:44.948 21:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:44.948 21:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:44.948 21:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:44.948 21:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:44.948 21:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:44.948 21:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:44.948 21:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:44.948 21:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:44.948 21:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:44.948 21:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:44.948 21:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:44.948 21:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:44.948 21:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:44.948 21:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:44.948 21:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:13:44.948 21:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:13:44.948 21:57:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:44.948 21:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.Wf1JfaXjg5 00:13:44.948 21:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1365899 00:13:44.948 21:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1365899 /var/tmp/spdk-raid.sock 00:13:44.948 21:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:44.948 21:57:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 1365899 ']' 00:13:44.948 21:57:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:44.949 21:57:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:44.949 21:57:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:44.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:44.949 21:57:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:44.949 21:57:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.949 [2024-07-13 21:57:04.236750] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:13:44.949 [2024-07-13 21:57:04.236847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1365899 ] 00:13:44.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:44.949 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:44.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:44.949 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:44.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:44.949 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:44.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:44.949 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:44.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:44.949 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:44.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:44.949 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:44.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:44.949 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:44.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:44.949 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:44.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:44.949 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:44.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:44.949 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:44.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:44.949 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:44.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:44.949 EAL: Requested device 0000:3d:02.3 cannot be used 
00:13:44.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:44.949 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:44.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:44.949 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:44.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:44.949 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:44.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:44.949 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:44.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:44.949 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:44.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:44.949 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:44.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:44.949 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:44.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.208 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:45.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.208 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:45.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.208 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:45.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.208 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:45.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.208 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:45.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.208 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:45.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.208 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:45.208 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.208 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:45.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.208 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:45.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.208 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:45.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.208 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:45.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.208 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:45.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.208 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:45.208 [2024-07-13 21:57:04.402592] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.467 [2024-07-13 21:57:04.621067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.726 [2024-07-13 21:57:04.874252] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.726 [2024-07-13 21:57:04.874282] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.726 21:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:45.726 21:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:13:45.726 21:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:45.726 21:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:45.986 BaseBdev1_malloc 00:13:45.986 21:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:46.245 true 00:13:46.245 21:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:46.245 [2024-07-13 21:57:05.566540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:46.245 [2024-07-13 21:57:05.566594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.245 [2024-07-13 21:57:05.566615] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:13:46.246 [2024-07-13 21:57:05.566632] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.246 [2024-07-13 21:57:05.568766] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.246 [2024-07-13 21:57:05.568797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:46.246 BaseBdev1 00:13:46.246 21:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:46.246 21:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:46.505 BaseBdev2_malloc 00:13:46.505 21:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:46.765 true 00:13:46.765 21:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:46.765 [2024-07-13 21:57:06.106591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:13:46.765 [2024-07-13 21:57:06.106640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.765 [2024-07-13 21:57:06.106661] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:13:46.765 [2024-07-13 21:57:06.106677] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.765 [2024-07-13 21:57:06.108605] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.765 [2024-07-13 21:57:06.108635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:46.765 BaseBdev2 00:13:46.765 21:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:13:47.024 [2024-07-13 21:57:06.275100] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:47.025 [2024-07-13 21:57:06.276889] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:47.025 [2024-07-13 21:57:06.277083] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040e80 00:13:47.025 [2024-07-13 21:57:06.277101] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:47.025 [2024-07-13 21:57:06.277352] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:13:47.025 [2024-07-13 21:57:06.277544] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040e80 00:13:47.025 [2024-07-13 21:57:06.277555] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040e80 00:13:47.025 [2024-07-13 21:57:06.277713] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.025 21:57:06 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:47.025 21:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:47.025 21:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:47.025 21:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:47.025 21:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:47.025 21:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:47.025 21:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:47.025 21:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:47.025 21:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:47.025 21:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:47.025 21:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.025 21:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.284 21:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:47.284 "name": "raid_bdev1", 00:13:47.284 "uuid": "61464f65-6188-45e9-8d51-cbc75db21575", 00:13:47.284 "strip_size_kb": 0, 00:13:47.284 "state": "online", 00:13:47.284 "raid_level": "raid1", 00:13:47.284 "superblock": true, 00:13:47.284 "num_base_bdevs": 2, 00:13:47.284 "num_base_bdevs_discovered": 2, 00:13:47.284 "num_base_bdevs_operational": 2, 00:13:47.284 "base_bdevs_list": [ 00:13:47.284 { 00:13:47.284 "name": "BaseBdev1", 00:13:47.284 "uuid": "cadbfd46-26a3-5a55-9c73-37c467ec9c00", 00:13:47.284 "is_configured": 
true, 00:13:47.284 "data_offset": 2048, 00:13:47.284 "data_size": 63488 00:13:47.284 }, 00:13:47.284 { 00:13:47.284 "name": "BaseBdev2", 00:13:47.284 "uuid": "082ac3c3-4376-554a-88d4-0974c5814a9d", 00:13:47.284 "is_configured": true, 00:13:47.284 "data_offset": 2048, 00:13:47.284 "data_size": 63488 00:13:47.284 } 00:13:47.284 ] 00:13:47.284 }' 00:13:47.284 21:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:47.284 21:57:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.853 21:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:47.853 21:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:47.853 [2024-07-13 21:57:07.054597] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:13:48.791 21:57:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:48.791 21:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:48.792 21:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:48.792 21:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:13:48.792 21:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:13:48.792 21:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:48.792 21:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:48.792 21:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:13:48.792 21:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:48.792 21:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:48.792 21:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:48.792 21:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:48.792 21:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:48.792 21:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:48.792 21:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:48.792 21:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.792 21:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.051 21:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:49.051 "name": "raid_bdev1", 00:13:49.051 "uuid": "61464f65-6188-45e9-8d51-cbc75db21575", 00:13:49.051 "strip_size_kb": 0, 00:13:49.051 "state": "online", 00:13:49.051 "raid_level": "raid1", 00:13:49.051 "superblock": true, 00:13:49.051 "num_base_bdevs": 2, 00:13:49.051 "num_base_bdevs_discovered": 2, 00:13:49.051 "num_base_bdevs_operational": 2, 00:13:49.051 "base_bdevs_list": [ 00:13:49.051 { 00:13:49.051 "name": "BaseBdev1", 00:13:49.051 "uuid": "cadbfd46-26a3-5a55-9c73-37c467ec9c00", 00:13:49.051 "is_configured": true, 00:13:49.051 "data_offset": 2048, 00:13:49.051 "data_size": 63488 00:13:49.051 }, 00:13:49.051 { 00:13:49.051 "name": "BaseBdev2", 00:13:49.051 "uuid": "082ac3c3-4376-554a-88d4-0974c5814a9d", 00:13:49.051 "is_configured": true, 00:13:49.051 "data_offset": 2048, 00:13:49.051 "data_size": 63488 
00:13:49.051 } 00:13:49.051 ] 00:13:49.051 }' 00:13:49.051 21:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:49.051 21:57:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.619 21:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:49.620 [2024-07-13 21:57:08.994560] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:49.620 [2024-07-13 21:57:08.994606] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:49.620 [2024-07-13 21:57:08.996869] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:49.620 [2024-07-13 21:57:08.996919] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.620 [2024-07-13 21:57:08.997011] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:49.620 [2024-07-13 21:57:08.997028] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state offline 00:13:49.620 0 00:13:49.879 21:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1365899 00:13:49.879 21:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 1365899 ']' 00:13:49.879 21:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 1365899 00:13:49.879 21:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:13:49.879 21:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:49.879 21:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1365899 00:13:49.879 21:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:13:49.879 21:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:49.879 21:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1365899' 00:13:49.879 killing process with pid 1365899 00:13:49.879 21:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 1365899 00:13:49.879 [2024-07-13 21:57:09.066092] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:49.879 21:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 1365899 00:13:49.879 [2024-07-13 21:57:09.136958] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:51.261 21:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Wf1JfaXjg5 00:13:51.261 21:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:51.261 21:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:51.261 21:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:13:51.261 21:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:13:51.261 21:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:51.261 21:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:13:51.261 21:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:51.261 00:13:51.261 real 0m6.299s 00:13:51.261 user 0m8.676s 00:13:51.261 sys 0m1.052s 00:13:51.261 21:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:51.261 21:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.261 ************************************ 00:13:51.261 END TEST raid_read_error_test 00:13:51.261 ************************************ 00:13:51.261 
21:57:10 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:51.261 21:57:10 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:13:51.261 21:57:10 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:51.261 21:57:10 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:51.261 21:57:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:51.261 ************************************ 00:13:51.261 START TEST raid_write_error_test 00:13:51.261 ************************************ 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 write 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:51.261 21:57:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.4rCTNHhK1k 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1367335 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1367335 /var/tmp/spdk-raid.sock 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 1367335 ']' 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:51.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:51.261 21:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.261 [2024-07-13 21:57:10.622626] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:51.261 [2024-07-13 21:57:10.622719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1367335 ] 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:13:51.522 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: 
Requested device 0000:3f:01.6 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:51.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.522 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:51.522 [2024-07-13 21:57:10.783621] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.782 [2024-07-13 21:57:10.984111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.041 [2024-07-13 21:57:11.235974] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.041 [2024-07-13 21:57:11.236009] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.041 21:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:52.041 21:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:13:52.041 21:57:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:52.041 21:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:52.300 BaseBdev1_malloc 00:13:52.300 21:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:52.560 true 00:13:52.560 21:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:52.560 [2024-07-13 21:57:11.882880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:52.560 [2024-07-13 21:57:11.882955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.560 [2024-07-13 21:57:11.882975] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:13:52.560 [2024-07-13 21:57:11.882992] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.560 [2024-07-13 21:57:11.885073] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.560 [2024-07-13 21:57:11.885102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:52.560 BaseBdev1 00:13:52.560 21:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:52.560 21:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:52.819 BaseBdev2_malloc 00:13:52.819 21:57:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:53.078 true 00:13:53.078 21:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:53.078 [2024-07-13 21:57:12.386971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:53.078 [2024-07-13 21:57:12.387018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.078 [2024-07-13 21:57:12.387062] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:13:53.078 [2024-07-13 21:57:12.387079] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.078 [2024-07-13 21:57:12.389176] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.078 [2024-07-13 21:57:12.389206] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:53.078 BaseBdev2 00:13:53.078 21:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:13:53.345 [2024-07-13 21:57:12.555474] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:53.345 [2024-07-13 21:57:12.557266] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:53.345 [2024-07-13 21:57:12.557459] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040e80 00:13:53.345 [2024-07-13 21:57:12.557477] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:53.345 [2024-07-13 21:57:12.557726] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000010570 00:13:53.345 [2024-07-13 21:57:12.557930] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040e80 00:13:53.345 [2024-07-13 21:57:12.557942] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040e80 00:13:53.345 [2024-07-13 21:57:12.558100] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.345 21:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:53.345 21:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:53.345 21:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:53.345 21:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:53.345 21:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:53.345 21:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:53.345 21:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:53.345 21:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:53.345 21:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:53.345 21:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:53.346 21:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:53.346 21:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.630 21:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:53.630 "name": 
"raid_bdev1", 00:13:53.630 "uuid": "bfb70044-7717-4418-a0ca-6c0a1c9f37c6", 00:13:53.630 "strip_size_kb": 0, 00:13:53.630 "state": "online", 00:13:53.630 "raid_level": "raid1", 00:13:53.630 "superblock": true, 00:13:53.630 "num_base_bdevs": 2, 00:13:53.630 "num_base_bdevs_discovered": 2, 00:13:53.630 "num_base_bdevs_operational": 2, 00:13:53.630 "base_bdevs_list": [ 00:13:53.630 { 00:13:53.630 "name": "BaseBdev1", 00:13:53.630 "uuid": "ff6b7158-e4d7-5401-bc79-4931d5885531", 00:13:53.630 "is_configured": true, 00:13:53.630 "data_offset": 2048, 00:13:53.630 "data_size": 63488 00:13:53.630 }, 00:13:53.630 { 00:13:53.630 "name": "BaseBdev2", 00:13:53.630 "uuid": "3c1f028f-1e00-563d-887e-b76ab1cd7662", 00:13:53.630 "is_configured": true, 00:13:53.630 "data_offset": 2048, 00:13:53.630 "data_size": 63488 00:13:53.630 } 00:13:53.630 ] 00:13:53.630 }' 00:13:53.630 21:57:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:53.630 21:57:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.888 21:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:53.888 21:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:54.146 [2024-07-13 21:57:13.322979] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:13:55.078 21:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:55.078 [2024-07-13 21:57:14.403573] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:55.078 [2024-07-13 21:57:14.403633] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:55.078 
[2024-07-13 21:57:14.403849] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000010710 00:13:55.078 21:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:55.078 21:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:55.078 21:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:13:55.078 21:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1 00:13:55.078 21:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:55.078 21:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:55.078 21:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:55.078 21:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:55.078 21:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:55.078 21:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:13:55.078 21:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:55.078 21:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:55.078 21:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:55.078 21:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:55.078 21:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.078 21:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:13:55.335 21:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:55.335 "name": "raid_bdev1", 00:13:55.335 "uuid": "bfb70044-7717-4418-a0ca-6c0a1c9f37c6", 00:13:55.335 "strip_size_kb": 0, 00:13:55.335 "state": "online", 00:13:55.335 "raid_level": "raid1", 00:13:55.335 "superblock": true, 00:13:55.335 "num_base_bdevs": 2, 00:13:55.335 "num_base_bdevs_discovered": 1, 00:13:55.335 "num_base_bdevs_operational": 1, 00:13:55.335 "base_bdevs_list": [ 00:13:55.335 { 00:13:55.335 "name": null, 00:13:55.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.335 "is_configured": false, 00:13:55.335 "data_offset": 2048, 00:13:55.335 "data_size": 63488 00:13:55.335 }, 00:13:55.335 { 00:13:55.335 "name": "BaseBdev2", 00:13:55.335 "uuid": "3c1f028f-1e00-563d-887e-b76ab1cd7662", 00:13:55.335 "is_configured": true, 00:13:55.335 "data_offset": 2048, 00:13:55.335 "data_size": 63488 00:13:55.335 } 00:13:55.335 ] 00:13:55.335 }' 00:13:55.335 21:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:55.335 21:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.899 21:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:55.899 [2024-07-13 21:57:15.252707] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:55.899 [2024-07-13 21:57:15.252748] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:55.899 [2024-07-13 21:57:15.255067] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:55.899 [2024-07-13 21:57:15.255119] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.899 [2024-07-13 21:57:15.255168] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:13:55.899 [2024-07-13 21:57:15.255180] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state offline 00:13:55.899 0 00:13:55.899 21:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1367335 00:13:55.899 21:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 1367335 ']' 00:13:55.899 21:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 1367335 00:13:55.899 21:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:13:55.899 21:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:55.899 21:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1367335 00:13:56.157 21:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:56.157 21:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:56.157 21:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1367335' 00:13:56.157 killing process with pid 1367335 00:13:56.157 21:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 1367335 00:13:56.157 [2024-07-13 21:57:15.327391] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:56.157 21:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 1367335 00:13:56.157 [2024-07-13 21:57:15.398912] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:57.603 21:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.4rCTNHhK1k 00:13:57.603 21:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:57.603 21:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 
00:13:57.603 21:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:13:57.603 21:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:13:57.603 21:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:57.603 21:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:13:57.603 21:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:57.603 00:13:57.603 real 0m6.163s 00:13:57.603 user 0m8.523s 00:13:57.603 sys 0m0.975s 00:13:57.603 21:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:57.603 21:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.603 ************************************ 00:13:57.603 END TEST raid_write_error_test 00:13:57.603 ************************************ 00:13:57.603 21:57:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:57.603 21:57:16 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:13:57.603 21:57:16 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:13:57.603 21:57:16 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:13:57.603 21:57:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:57.603 21:57:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:57.603 21:57:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:57.603 ************************************ 00:13:57.603 START TEST raid_state_function_test 00:13:57.603 ************************************ 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 false 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:13:57.604 21:57:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- 
# local strip_size_create_arg 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1368491 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1368491' 00:13:57.604 Process raid pid: 1368491 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1368491 /var/tmp/spdk-raid.sock 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 1368491 ']' 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:57.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:57.604 21:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.604 [2024-07-13 21:57:16.850979] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:57.604 [2024-07-13 21:57:16.851069] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:13:57.604 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:57.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:57.604 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:57.863 [2024-07-13 21:57:17.015665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.863 [2024-07-13 21:57:17.221017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.122 [2024-07-13 21:57:17.470533] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.122 [2024-07-13 21:57:17.470561] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.381 21:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:58.381 21:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 
00:13:58.381 21:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:58.381 [2024-07-13 21:57:17.756665] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:58.381 [2024-07-13 21:57:17.756714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:58.381 [2024-07-13 21:57:17.756726] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:58.381 [2024-07-13 21:57:17.756754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:58.381 [2024-07-13 21:57:17.756763] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:58.381 [2024-07-13 21:57:17.756775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:58.638 21:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:58.639 21:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:58.639 21:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:58.639 21:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:58.639 21:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:58.639 21:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:58.639 21:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:58.639 21:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:13:58.639 21:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:58.639 21:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:58.639 21:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.639 21:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.639 21:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:58.639 "name": "Existed_Raid", 00:13:58.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.639 "strip_size_kb": 64, 00:13:58.639 "state": "configuring", 00:13:58.639 "raid_level": "raid0", 00:13:58.639 "superblock": false, 00:13:58.639 "num_base_bdevs": 3, 00:13:58.639 "num_base_bdevs_discovered": 0, 00:13:58.639 "num_base_bdevs_operational": 3, 00:13:58.639 "base_bdevs_list": [ 00:13:58.639 { 00:13:58.639 "name": "BaseBdev1", 00:13:58.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.639 "is_configured": false, 00:13:58.639 "data_offset": 0, 00:13:58.639 "data_size": 0 00:13:58.639 }, 00:13:58.639 { 00:13:58.639 "name": "BaseBdev2", 00:13:58.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.639 "is_configured": false, 00:13:58.639 "data_offset": 0, 00:13:58.639 "data_size": 0 00:13:58.639 }, 00:13:58.639 { 00:13:58.639 "name": "BaseBdev3", 00:13:58.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.639 "is_configured": false, 00:13:58.639 "data_offset": 0, 00:13:58.639 "data_size": 0 00:13:58.639 } 00:13:58.639 ] 00:13:58.639 }' 00:13:58.639 21:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:58.639 21:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.206 21:57:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:59.206 [2024-07-13 21:57:18.542619] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:59.206 [2024-07-13 21:57:18.542654] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:13:59.206 21:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:59.465 [2024-07-13 21:57:18.703101] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:59.465 [2024-07-13 21:57:18.703140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:59.465 [2024-07-13 21:57:18.703150] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:59.465 [2024-07-13 21:57:18.703180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:59.465 [2024-07-13 21:57:18.703188] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:59.465 [2024-07-13 21:57:18.703199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:59.465 21:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:59.724 [2024-07-13 21:57:18.915889] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:59.724 BaseBdev1 00:13:59.724 21:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:59.724 
21:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:59.724 21:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:59.724 21:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:59.724 21:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:59.724 21:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:59.724 21:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:59.724 21:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:59.983 [ 00:13:59.983 { 00:13:59.983 "name": "BaseBdev1", 00:13:59.983 "aliases": [ 00:13:59.983 "d426a139-7654-47ac-83b2-855ed4434243" 00:13:59.983 ], 00:13:59.983 "product_name": "Malloc disk", 00:13:59.983 "block_size": 512, 00:13:59.983 "num_blocks": 65536, 00:13:59.983 "uuid": "d426a139-7654-47ac-83b2-855ed4434243", 00:13:59.983 "assigned_rate_limits": { 00:13:59.983 "rw_ios_per_sec": 0, 00:13:59.983 "rw_mbytes_per_sec": 0, 00:13:59.983 "r_mbytes_per_sec": 0, 00:13:59.983 "w_mbytes_per_sec": 0 00:13:59.983 }, 00:13:59.983 "claimed": true, 00:13:59.983 "claim_type": "exclusive_write", 00:13:59.983 "zoned": false, 00:13:59.983 "supported_io_types": { 00:13:59.983 "read": true, 00:13:59.983 "write": true, 00:13:59.983 "unmap": true, 00:13:59.983 "flush": true, 00:13:59.983 "reset": true, 00:13:59.983 "nvme_admin": false, 00:13:59.983 "nvme_io": false, 00:13:59.983 "nvme_io_md": false, 00:13:59.983 "write_zeroes": true, 00:13:59.983 "zcopy": true, 00:13:59.983 "get_zone_info": false, 00:13:59.983 
"zone_management": false, 00:13:59.983 "zone_append": false, 00:13:59.983 "compare": false, 00:13:59.983 "compare_and_write": false, 00:13:59.983 "abort": true, 00:13:59.983 "seek_hole": false, 00:13:59.983 "seek_data": false, 00:13:59.983 "copy": true, 00:13:59.983 "nvme_iov_md": false 00:13:59.983 }, 00:13:59.983 "memory_domains": [ 00:13:59.983 { 00:13:59.983 "dma_device_id": "system", 00:13:59.983 "dma_device_type": 1 00:13:59.983 }, 00:13:59.983 { 00:13:59.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.983 "dma_device_type": 2 00:13:59.983 } 00:13:59.983 ], 00:13:59.983 "driver_specific": {} 00:13:59.983 } 00:13:59.983 ] 00:13:59.983 21:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:59.983 21:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:59.983 21:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:59.983 21:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:59.983 21:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:59.983 21:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:59.983 21:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:59.983 21:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:59.983 21:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:59.983 21:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:59.983 21:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:59.983 21:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:59.983 21:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.242 21:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:00.242 "name": "Existed_Raid", 00:14:00.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.242 "strip_size_kb": 64, 00:14:00.242 "state": "configuring", 00:14:00.242 "raid_level": "raid0", 00:14:00.242 "superblock": false, 00:14:00.242 "num_base_bdevs": 3, 00:14:00.242 "num_base_bdevs_discovered": 1, 00:14:00.242 "num_base_bdevs_operational": 3, 00:14:00.242 "base_bdevs_list": [ 00:14:00.242 { 00:14:00.242 "name": "BaseBdev1", 00:14:00.242 "uuid": "d426a139-7654-47ac-83b2-855ed4434243", 00:14:00.242 "is_configured": true, 00:14:00.242 "data_offset": 0, 00:14:00.242 "data_size": 65536 00:14:00.242 }, 00:14:00.242 { 00:14:00.242 "name": "BaseBdev2", 00:14:00.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.242 "is_configured": false, 00:14:00.242 "data_offset": 0, 00:14:00.242 "data_size": 0 00:14:00.242 }, 00:14:00.242 { 00:14:00.242 "name": "BaseBdev3", 00:14:00.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.242 "is_configured": false, 00:14:00.242 "data_offset": 0, 00:14:00.242 "data_size": 0 00:14:00.242 } 00:14:00.242 ] 00:14:00.242 }' 00:14:00.242 21:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:00.242 21:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.809 21:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:00.809 [2024-07-13 21:57:20.038884] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:00.809 
[2024-07-13 21:57:20.038950] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:14:00.809 21:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:01.068 [2024-07-13 21:57:20.203369] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:01.068 [2024-07-13 21:57:20.205095] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:01.068 [2024-07-13 21:57:20.205129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:01.068 [2024-07-13 21:57:20.205139] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:01.068 [2024-07-13 21:57:20.205167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:01.068 21:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:01.068 21:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:01.068 21:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:01.068 21:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:01.068 21:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:01.068 21:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:01.068 21:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:01.068 21:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
00:14:01.068 21:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:01.068 21:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:01.068 21:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:01.068 21:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:01.068 21:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.068 21:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.068 21:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:01.068 "name": "Existed_Raid", 00:14:01.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.068 "strip_size_kb": 64, 00:14:01.068 "state": "configuring", 00:14:01.068 "raid_level": "raid0", 00:14:01.068 "superblock": false, 00:14:01.068 "num_base_bdevs": 3, 00:14:01.068 "num_base_bdevs_discovered": 1, 00:14:01.068 "num_base_bdevs_operational": 3, 00:14:01.068 "base_bdevs_list": [ 00:14:01.068 { 00:14:01.068 "name": "BaseBdev1", 00:14:01.068 "uuid": "d426a139-7654-47ac-83b2-855ed4434243", 00:14:01.068 "is_configured": true, 00:14:01.068 "data_offset": 0, 00:14:01.068 "data_size": 65536 00:14:01.068 }, 00:14:01.068 { 00:14:01.068 "name": "BaseBdev2", 00:14:01.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.068 "is_configured": false, 00:14:01.068 "data_offset": 0, 00:14:01.068 "data_size": 0 00:14:01.068 }, 00:14:01.068 { 00:14:01.068 "name": "BaseBdev3", 00:14:01.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.068 "is_configured": false, 00:14:01.068 "data_offset": 0, 00:14:01.068 "data_size": 0 00:14:01.068 } 00:14:01.068 ] 00:14:01.068 }' 00:14:01.068 21:57:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:01.068 21:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.636 21:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:01.895 [2024-07-13 21:57:21.083519] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:01.895 BaseBdev2 00:14:01.895 21:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:01.895 21:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:01.895 21:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:01.895 21:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:01.896 21:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:01.896 21:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:01.896 21:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:01.896 21:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:02.154 [ 00:14:02.154 { 00:14:02.154 "name": "BaseBdev2", 00:14:02.154 "aliases": [ 00:14:02.154 "638abdba-c120-4d61-b530-57a5bb71ae0f" 00:14:02.154 ], 00:14:02.154 "product_name": "Malloc disk", 00:14:02.154 "block_size": 512, 00:14:02.154 "num_blocks": 65536, 00:14:02.154 "uuid": "638abdba-c120-4d61-b530-57a5bb71ae0f", 00:14:02.154 
"assigned_rate_limits": { 00:14:02.154 "rw_ios_per_sec": 0, 00:14:02.154 "rw_mbytes_per_sec": 0, 00:14:02.154 "r_mbytes_per_sec": 0, 00:14:02.154 "w_mbytes_per_sec": 0 00:14:02.154 }, 00:14:02.154 "claimed": true, 00:14:02.154 "claim_type": "exclusive_write", 00:14:02.154 "zoned": false, 00:14:02.154 "supported_io_types": { 00:14:02.154 "read": true, 00:14:02.154 "write": true, 00:14:02.154 "unmap": true, 00:14:02.154 "flush": true, 00:14:02.154 "reset": true, 00:14:02.154 "nvme_admin": false, 00:14:02.154 "nvme_io": false, 00:14:02.154 "nvme_io_md": false, 00:14:02.154 "write_zeroes": true, 00:14:02.154 "zcopy": true, 00:14:02.154 "get_zone_info": false, 00:14:02.154 "zone_management": false, 00:14:02.154 "zone_append": false, 00:14:02.154 "compare": false, 00:14:02.154 "compare_and_write": false, 00:14:02.154 "abort": true, 00:14:02.154 "seek_hole": false, 00:14:02.154 "seek_data": false, 00:14:02.154 "copy": true, 00:14:02.154 "nvme_iov_md": false 00:14:02.154 }, 00:14:02.154 "memory_domains": [ 00:14:02.154 { 00:14:02.154 "dma_device_id": "system", 00:14:02.154 "dma_device_type": 1 00:14:02.154 }, 00:14:02.154 { 00:14:02.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.154 "dma_device_type": 2 00:14:02.154 } 00:14:02.154 ], 00:14:02.154 "driver_specific": {} 00:14:02.154 } 00:14:02.154 ] 00:14:02.154 21:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:02.154 21:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:02.154 21:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:02.154 21:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:02.154 21:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:02.154 21:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:14:02.154 21:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:02.154 21:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:02.154 21:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:02.154 21:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:02.154 21:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:02.154 21:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:02.154 21:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:02.155 21:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:02.155 21:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.413 21:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:02.413 "name": "Existed_Raid", 00:14:02.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.413 "strip_size_kb": 64, 00:14:02.413 "state": "configuring", 00:14:02.413 "raid_level": "raid0", 00:14:02.413 "superblock": false, 00:14:02.413 "num_base_bdevs": 3, 00:14:02.413 "num_base_bdevs_discovered": 2, 00:14:02.413 "num_base_bdevs_operational": 3, 00:14:02.413 "base_bdevs_list": [ 00:14:02.413 { 00:14:02.414 "name": "BaseBdev1", 00:14:02.414 "uuid": "d426a139-7654-47ac-83b2-855ed4434243", 00:14:02.414 "is_configured": true, 00:14:02.414 "data_offset": 0, 00:14:02.414 "data_size": 65536 00:14:02.414 }, 00:14:02.414 { 00:14:02.414 "name": "BaseBdev2", 00:14:02.414 "uuid": "638abdba-c120-4d61-b530-57a5bb71ae0f", 00:14:02.414 "is_configured": 
true, 00:14:02.414 "data_offset": 0, 00:14:02.414 "data_size": 65536 00:14:02.414 }, 00:14:02.414 { 00:14:02.414 "name": "BaseBdev3", 00:14:02.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.414 "is_configured": false, 00:14:02.414 "data_offset": 0, 00:14:02.414 "data_size": 0 00:14:02.414 } 00:14:02.414 ] 00:14:02.414 }' 00:14:02.414 21:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:02.414 21:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.981 21:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:02.981 [2024-07-13 21:57:22.288988] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:02.981 [2024-07-13 21:57:22.289024] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:14:02.981 [2024-07-13 21:57:22.289038] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:02.981 [2024-07-13 21:57:22.289276] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:14:02.981 [2024-07-13 21:57:22.289456] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:14:02.981 [2024-07-13 21:57:22.289467] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:14:02.981 [2024-07-13 21:57:22.289719] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.981 BaseBdev3 00:14:02.981 21:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:02.982 21:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:02.982 21:57:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:02.982 21:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:02.982 21:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:02.982 21:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:02.982 21:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:03.240 21:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:03.499 [ 00:14:03.499 { 00:14:03.499 "name": "BaseBdev3", 00:14:03.499 "aliases": [ 00:14:03.499 "4c0330ec-92a8-4fa4-b090-932c93cdaa28" 00:14:03.499 ], 00:14:03.499 "product_name": "Malloc disk", 00:14:03.499 "block_size": 512, 00:14:03.499 "num_blocks": 65536, 00:14:03.499 "uuid": "4c0330ec-92a8-4fa4-b090-932c93cdaa28", 00:14:03.499 "assigned_rate_limits": { 00:14:03.499 "rw_ios_per_sec": 0, 00:14:03.499 "rw_mbytes_per_sec": 0, 00:14:03.499 "r_mbytes_per_sec": 0, 00:14:03.499 "w_mbytes_per_sec": 0 00:14:03.499 }, 00:14:03.499 "claimed": true, 00:14:03.499 "claim_type": "exclusive_write", 00:14:03.499 "zoned": false, 00:14:03.499 "supported_io_types": { 00:14:03.499 "read": true, 00:14:03.499 "write": true, 00:14:03.499 "unmap": true, 00:14:03.499 "flush": true, 00:14:03.499 "reset": true, 00:14:03.499 "nvme_admin": false, 00:14:03.499 "nvme_io": false, 00:14:03.499 "nvme_io_md": false, 00:14:03.499 "write_zeroes": true, 00:14:03.499 "zcopy": true, 00:14:03.499 "get_zone_info": false, 00:14:03.499 "zone_management": false, 00:14:03.499 "zone_append": false, 00:14:03.499 "compare": false, 00:14:03.499 "compare_and_write": false, 00:14:03.499 "abort": true, 00:14:03.499 
"seek_hole": false, 00:14:03.499 "seek_data": false, 00:14:03.499 "copy": true, 00:14:03.499 "nvme_iov_md": false 00:14:03.499 }, 00:14:03.499 "memory_domains": [ 00:14:03.499 { 00:14:03.499 "dma_device_id": "system", 00:14:03.499 "dma_device_type": 1 00:14:03.499 }, 00:14:03.499 { 00:14:03.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.499 "dma_device_type": 2 00:14:03.499 } 00:14:03.499 ], 00:14:03.499 "driver_specific": {} 00:14:03.499 } 00:14:03.499 ] 00:14:03.499 21:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:03.499 21:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:03.499 21:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:03.499 21:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:03.499 21:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:03.499 21:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:03.499 21:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:03.499 21:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:03.499 21:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:03.499 21:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:03.499 21:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:03.499 21:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:03.499 21:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:03.499 21:57:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:03.499 21:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.499 21:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:03.499 "name": "Existed_Raid", 00:14:03.499 "uuid": "8db7bbde-a1d8-4fca-836b-a712fb040cb8", 00:14:03.499 "strip_size_kb": 64, 00:14:03.499 "state": "online", 00:14:03.499 "raid_level": "raid0", 00:14:03.499 "superblock": false, 00:14:03.499 "num_base_bdevs": 3, 00:14:03.499 "num_base_bdevs_discovered": 3, 00:14:03.499 "num_base_bdevs_operational": 3, 00:14:03.499 "base_bdevs_list": [ 00:14:03.499 { 00:14:03.500 "name": "BaseBdev1", 00:14:03.500 "uuid": "d426a139-7654-47ac-83b2-855ed4434243", 00:14:03.500 "is_configured": true, 00:14:03.500 "data_offset": 0, 00:14:03.500 "data_size": 65536 00:14:03.500 }, 00:14:03.500 { 00:14:03.500 "name": "BaseBdev2", 00:14:03.500 "uuid": "638abdba-c120-4d61-b530-57a5bb71ae0f", 00:14:03.500 "is_configured": true, 00:14:03.500 "data_offset": 0, 00:14:03.500 "data_size": 65536 00:14:03.500 }, 00:14:03.500 { 00:14:03.500 "name": "BaseBdev3", 00:14:03.500 "uuid": "4c0330ec-92a8-4fa4-b090-932c93cdaa28", 00:14:03.500 "is_configured": true, 00:14:03.500 "data_offset": 0, 00:14:03.500 "data_size": 65536 00:14:03.500 } 00:14:03.500 ] 00:14:03.500 }' 00:14:03.500 21:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:03.500 21:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.067 21:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:04.067 21:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:04.067 21:57:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:04.067 21:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:04.067 21:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:04.067 21:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:04.067 21:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:04.067 21:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:04.326 [2024-07-13 21:57:23.468363] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:04.326 21:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:04.326 "name": "Existed_Raid", 00:14:04.326 "aliases": [ 00:14:04.326 "8db7bbde-a1d8-4fca-836b-a712fb040cb8" 00:14:04.326 ], 00:14:04.326 "product_name": "Raid Volume", 00:14:04.326 "block_size": 512, 00:14:04.326 "num_blocks": 196608, 00:14:04.326 "uuid": "8db7bbde-a1d8-4fca-836b-a712fb040cb8", 00:14:04.326 "assigned_rate_limits": { 00:14:04.326 "rw_ios_per_sec": 0, 00:14:04.326 "rw_mbytes_per_sec": 0, 00:14:04.326 "r_mbytes_per_sec": 0, 00:14:04.326 "w_mbytes_per_sec": 0 00:14:04.326 }, 00:14:04.326 "claimed": false, 00:14:04.326 "zoned": false, 00:14:04.326 "supported_io_types": { 00:14:04.326 "read": true, 00:14:04.326 "write": true, 00:14:04.326 "unmap": true, 00:14:04.326 "flush": true, 00:14:04.326 "reset": true, 00:14:04.326 "nvme_admin": false, 00:14:04.326 "nvme_io": false, 00:14:04.326 "nvme_io_md": false, 00:14:04.326 "write_zeroes": true, 00:14:04.326 "zcopy": false, 00:14:04.326 "get_zone_info": false, 00:14:04.326 "zone_management": false, 00:14:04.326 "zone_append": false, 00:14:04.326 "compare": false, 00:14:04.326 "compare_and_write": false, 00:14:04.326 "abort": 
false, 00:14:04.326 "seek_hole": false, 00:14:04.326 "seek_data": false, 00:14:04.326 "copy": false, 00:14:04.326 "nvme_iov_md": false 00:14:04.326 }, 00:14:04.326 "memory_domains": [ 00:14:04.326 { 00:14:04.326 "dma_device_id": "system", 00:14:04.326 "dma_device_type": 1 00:14:04.326 }, 00:14:04.326 { 00:14:04.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.326 "dma_device_type": 2 00:14:04.326 }, 00:14:04.326 { 00:14:04.326 "dma_device_id": "system", 00:14:04.326 "dma_device_type": 1 00:14:04.326 }, 00:14:04.326 { 00:14:04.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.326 "dma_device_type": 2 00:14:04.326 }, 00:14:04.326 { 00:14:04.326 "dma_device_id": "system", 00:14:04.326 "dma_device_type": 1 00:14:04.326 }, 00:14:04.326 { 00:14:04.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.326 "dma_device_type": 2 00:14:04.326 } 00:14:04.326 ], 00:14:04.326 "driver_specific": { 00:14:04.326 "raid": { 00:14:04.326 "uuid": "8db7bbde-a1d8-4fca-836b-a712fb040cb8", 00:14:04.326 "strip_size_kb": 64, 00:14:04.326 "state": "online", 00:14:04.326 "raid_level": "raid0", 00:14:04.326 "superblock": false, 00:14:04.326 "num_base_bdevs": 3, 00:14:04.326 "num_base_bdevs_discovered": 3, 00:14:04.326 "num_base_bdevs_operational": 3, 00:14:04.326 "base_bdevs_list": [ 00:14:04.326 { 00:14:04.326 "name": "BaseBdev1", 00:14:04.326 "uuid": "d426a139-7654-47ac-83b2-855ed4434243", 00:14:04.326 "is_configured": true, 00:14:04.326 "data_offset": 0, 00:14:04.326 "data_size": 65536 00:14:04.326 }, 00:14:04.326 { 00:14:04.326 "name": "BaseBdev2", 00:14:04.326 "uuid": "638abdba-c120-4d61-b530-57a5bb71ae0f", 00:14:04.326 "is_configured": true, 00:14:04.326 "data_offset": 0, 00:14:04.326 "data_size": 65536 00:14:04.326 }, 00:14:04.326 { 00:14:04.326 "name": "BaseBdev3", 00:14:04.326 "uuid": "4c0330ec-92a8-4fa4-b090-932c93cdaa28", 00:14:04.326 "is_configured": true, 00:14:04.326 "data_offset": 0, 00:14:04.326 "data_size": 65536 00:14:04.326 } 00:14:04.326 ] 00:14:04.326 } 
00:14:04.326 } 00:14:04.326 }' 00:14:04.326 21:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:04.326 21:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:04.326 BaseBdev2 00:14:04.326 BaseBdev3' 00:14:04.326 21:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:04.326 21:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:04.326 21:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:04.326 21:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:04.326 "name": "BaseBdev1", 00:14:04.326 "aliases": [ 00:14:04.326 "d426a139-7654-47ac-83b2-855ed4434243" 00:14:04.326 ], 00:14:04.326 "product_name": "Malloc disk", 00:14:04.326 "block_size": 512, 00:14:04.326 "num_blocks": 65536, 00:14:04.326 "uuid": "d426a139-7654-47ac-83b2-855ed4434243", 00:14:04.326 "assigned_rate_limits": { 00:14:04.326 "rw_ios_per_sec": 0, 00:14:04.326 "rw_mbytes_per_sec": 0, 00:14:04.326 "r_mbytes_per_sec": 0, 00:14:04.326 "w_mbytes_per_sec": 0 00:14:04.326 }, 00:14:04.326 "claimed": true, 00:14:04.326 "claim_type": "exclusive_write", 00:14:04.326 "zoned": false, 00:14:04.326 "supported_io_types": { 00:14:04.326 "read": true, 00:14:04.326 "write": true, 00:14:04.326 "unmap": true, 00:14:04.326 "flush": true, 00:14:04.326 "reset": true, 00:14:04.326 "nvme_admin": false, 00:14:04.326 "nvme_io": false, 00:14:04.326 "nvme_io_md": false, 00:14:04.326 "write_zeroes": true, 00:14:04.326 "zcopy": true, 00:14:04.326 "get_zone_info": false, 00:14:04.326 "zone_management": false, 00:14:04.326 "zone_append": false, 00:14:04.326 "compare": false, 00:14:04.326 
"compare_and_write": false, 00:14:04.326 "abort": true, 00:14:04.326 "seek_hole": false, 00:14:04.326 "seek_data": false, 00:14:04.326 "copy": true, 00:14:04.326 "nvme_iov_md": false 00:14:04.326 }, 00:14:04.326 "memory_domains": [ 00:14:04.326 { 00:14:04.326 "dma_device_id": "system", 00:14:04.326 "dma_device_type": 1 00:14:04.326 }, 00:14:04.326 { 00:14:04.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.326 "dma_device_type": 2 00:14:04.326 } 00:14:04.326 ], 00:14:04.326 "driver_specific": {} 00:14:04.326 }' 00:14:04.326 21:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:04.584 21:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:04.584 21:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:04.584 21:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:04.584 21:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:04.584 21:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:04.584 21:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:04.584 21:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:04.584 21:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:04.584 21:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:04.843 21:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:04.843 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:04.843 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:04.843 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:04.843 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:04.843 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:04.843 "name": "BaseBdev2", 00:14:04.843 "aliases": [ 00:14:04.843 "638abdba-c120-4d61-b530-57a5bb71ae0f" 00:14:04.843 ], 00:14:04.843 "product_name": "Malloc disk", 00:14:04.843 "block_size": 512, 00:14:04.843 "num_blocks": 65536, 00:14:04.843 "uuid": "638abdba-c120-4d61-b530-57a5bb71ae0f", 00:14:04.843 "assigned_rate_limits": { 00:14:04.843 "rw_ios_per_sec": 0, 00:14:04.843 "rw_mbytes_per_sec": 0, 00:14:04.843 "r_mbytes_per_sec": 0, 00:14:04.843 "w_mbytes_per_sec": 0 00:14:04.843 }, 00:14:04.843 "claimed": true, 00:14:04.843 "claim_type": "exclusive_write", 00:14:04.843 "zoned": false, 00:14:04.843 "supported_io_types": { 00:14:04.843 "read": true, 00:14:04.843 "write": true, 00:14:04.843 "unmap": true, 00:14:04.843 "flush": true, 00:14:04.843 "reset": true, 00:14:04.843 "nvme_admin": false, 00:14:04.843 "nvme_io": false, 00:14:04.843 "nvme_io_md": false, 00:14:04.843 "write_zeroes": true, 00:14:04.843 "zcopy": true, 00:14:04.843 "get_zone_info": false, 00:14:04.843 "zone_management": false, 00:14:04.843 "zone_append": false, 00:14:04.843 "compare": false, 00:14:04.843 "compare_and_write": false, 00:14:04.843 "abort": true, 00:14:04.843 "seek_hole": false, 00:14:04.843 "seek_data": false, 00:14:04.843 "copy": true, 00:14:04.843 "nvme_iov_md": false 00:14:04.843 }, 00:14:04.843 "memory_domains": [ 00:14:04.843 { 00:14:04.843 "dma_device_id": "system", 00:14:04.843 "dma_device_type": 1 00:14:04.843 }, 00:14:04.843 { 00:14:04.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.843 "dma_device_type": 2 00:14:04.843 } 00:14:04.843 ], 00:14:04.843 "driver_specific": {} 00:14:04.843 }' 00:14:04.843 21:57:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:04.843 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:05.102 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:05.102 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:05.102 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:05.102 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:05.102 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:05.102 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:05.102 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:05.102 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:05.102 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:05.361 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:05.361 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:05.361 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:05.361 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:05.361 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:05.361 "name": "BaseBdev3", 00:14:05.361 "aliases": [ 00:14:05.361 "4c0330ec-92a8-4fa4-b090-932c93cdaa28" 00:14:05.361 ], 00:14:05.361 "product_name": "Malloc disk", 00:14:05.361 "block_size": 512, 00:14:05.361 "num_blocks": 65536, 00:14:05.361 "uuid": "4c0330ec-92a8-4fa4-b090-932c93cdaa28", 
00:14:05.361 "assigned_rate_limits": { 00:14:05.361 "rw_ios_per_sec": 0, 00:14:05.361 "rw_mbytes_per_sec": 0, 00:14:05.361 "r_mbytes_per_sec": 0, 00:14:05.361 "w_mbytes_per_sec": 0 00:14:05.361 }, 00:14:05.361 "claimed": true, 00:14:05.361 "claim_type": "exclusive_write", 00:14:05.361 "zoned": false, 00:14:05.361 "supported_io_types": { 00:14:05.361 "read": true, 00:14:05.361 "write": true, 00:14:05.361 "unmap": true, 00:14:05.361 "flush": true, 00:14:05.361 "reset": true, 00:14:05.361 "nvme_admin": false, 00:14:05.361 "nvme_io": false, 00:14:05.361 "nvme_io_md": false, 00:14:05.361 "write_zeroes": true, 00:14:05.361 "zcopy": true, 00:14:05.361 "get_zone_info": false, 00:14:05.361 "zone_management": false, 00:14:05.361 "zone_append": false, 00:14:05.361 "compare": false, 00:14:05.361 "compare_and_write": false, 00:14:05.361 "abort": true, 00:14:05.361 "seek_hole": false, 00:14:05.361 "seek_data": false, 00:14:05.361 "copy": true, 00:14:05.361 "nvme_iov_md": false 00:14:05.361 }, 00:14:05.361 "memory_domains": [ 00:14:05.361 { 00:14:05.361 "dma_device_id": "system", 00:14:05.361 "dma_device_type": 1 00:14:05.361 }, 00:14:05.361 { 00:14:05.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.361 "dma_device_type": 2 00:14:05.361 } 00:14:05.361 ], 00:14:05.361 "driver_specific": {} 00:14:05.361 }' 00:14:05.361 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:05.361 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:05.619 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:05.619 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:05.619 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:05.619 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:05.619 21:57:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:05.619 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:05.619 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:05.619 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:05.619 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:05.619 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:05.619 21:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:05.878 [2024-07-13 21:57:25.128555] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:05.878 [2024-07-13 21:57:25.128582] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:05.878 [2024-07-13 21:57:25.128629] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:05.878 21:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:05.878 21:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:14:05.878 21:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:05.878 21:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:05.878 21:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:05.878 21:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:14:05.878 21:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:05.878 21:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=offline 00:14:05.878 21:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:05.878 21:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:05.878 21:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:05.878 21:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:05.878 21:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:05.878 21:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:05.878 21:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:05.878 21:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.878 21:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.137 21:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:06.137 "name": "Existed_Raid", 00:14:06.137 "uuid": "8db7bbde-a1d8-4fca-836b-a712fb040cb8", 00:14:06.137 "strip_size_kb": 64, 00:14:06.137 "state": "offline", 00:14:06.137 "raid_level": "raid0", 00:14:06.137 "superblock": false, 00:14:06.137 "num_base_bdevs": 3, 00:14:06.137 "num_base_bdevs_discovered": 2, 00:14:06.137 "num_base_bdevs_operational": 2, 00:14:06.137 "base_bdevs_list": [ 00:14:06.138 { 00:14:06.138 "name": null, 00:14:06.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.138 "is_configured": false, 00:14:06.138 "data_offset": 0, 00:14:06.138 "data_size": 65536 00:14:06.138 }, 00:14:06.138 { 00:14:06.138 "name": "BaseBdev2", 00:14:06.138 "uuid": "638abdba-c120-4d61-b530-57a5bb71ae0f", 00:14:06.138 "is_configured": true, 
00:14:06.138 "data_offset": 0, 00:14:06.138 "data_size": 65536 00:14:06.138 }, 00:14:06.138 { 00:14:06.138 "name": "BaseBdev3", 00:14:06.138 "uuid": "4c0330ec-92a8-4fa4-b090-932c93cdaa28", 00:14:06.138 "is_configured": true, 00:14:06.138 "data_offset": 0, 00:14:06.138 "data_size": 65536 00:14:06.138 } 00:14:06.138 ] 00:14:06.138 }' 00:14:06.138 21:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:06.138 21:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.706 21:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:06.706 21:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:06.706 21:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.706 21:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:06.706 21:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:06.706 21:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:06.706 21:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:06.965 [2024-07-13 21:57:26.138834] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:06.965 21:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:06.965 21:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:06.965 21:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:14:06.965 21:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:07.224 21:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:07.224 21:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:07.224 21:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:07.224 [2024-07-13 21:57:26.572357] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:07.224 [2024-07-13 21:57:26.572406] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:14:07.484 21:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:07.484 21:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:07.484 21:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.484 21:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:07.484 21:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:07.484 21:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:07.484 21:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:14:07.484 21:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:07.484 21:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:07.484 21:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:07.743 BaseBdev2 00:14:07.743 21:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:07.743 21:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:07.743 21:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:07.743 21:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:07.743 21:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:07.743 21:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:07.743 21:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:08.002 21:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:08.262 [ 00:14:08.262 { 00:14:08.262 "name": "BaseBdev2", 00:14:08.262 "aliases": [ 00:14:08.262 "422a475e-8361-47dc-a523-6bfe066e86ab" 00:14:08.262 ], 00:14:08.262 "product_name": "Malloc disk", 00:14:08.262 "block_size": 512, 00:14:08.262 "num_blocks": 65536, 00:14:08.262 "uuid": "422a475e-8361-47dc-a523-6bfe066e86ab", 00:14:08.262 "assigned_rate_limits": { 00:14:08.262 "rw_ios_per_sec": 0, 00:14:08.262 "rw_mbytes_per_sec": 0, 00:14:08.262 "r_mbytes_per_sec": 0, 00:14:08.262 "w_mbytes_per_sec": 0 00:14:08.262 }, 00:14:08.262 "claimed": false, 00:14:08.262 "zoned": false, 00:14:08.262 "supported_io_types": { 00:14:08.262 "read": true, 00:14:08.262 "write": true, 00:14:08.262 "unmap": true, 00:14:08.262 "flush": true, 00:14:08.262 
"reset": true, 00:14:08.262 "nvme_admin": false, 00:14:08.262 "nvme_io": false, 00:14:08.262 "nvme_io_md": false, 00:14:08.262 "write_zeroes": true, 00:14:08.262 "zcopy": true, 00:14:08.262 "get_zone_info": false, 00:14:08.262 "zone_management": false, 00:14:08.262 "zone_append": false, 00:14:08.262 "compare": false, 00:14:08.262 "compare_and_write": false, 00:14:08.262 "abort": true, 00:14:08.262 "seek_hole": false, 00:14:08.262 "seek_data": false, 00:14:08.262 "copy": true, 00:14:08.262 "nvme_iov_md": false 00:14:08.262 }, 00:14:08.262 "memory_domains": [ 00:14:08.262 { 00:14:08.262 "dma_device_id": "system", 00:14:08.262 "dma_device_type": 1 00:14:08.262 }, 00:14:08.262 { 00:14:08.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.262 "dma_device_type": 2 00:14:08.262 } 00:14:08.262 ], 00:14:08.262 "driver_specific": {} 00:14:08.262 } 00:14:08.262 ] 00:14:08.262 21:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:08.262 21:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:08.262 21:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:08.262 21:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:08.262 BaseBdev3 00:14:08.262 21:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:08.262 21:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:08.262 21:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:08.262 21:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:08.262 21:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:08.262 21:57:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:08.262 21:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:08.521 21:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:08.781 [ 00:14:08.781 { 00:14:08.781 "name": "BaseBdev3", 00:14:08.781 "aliases": [ 00:14:08.781 "a7ed8b9a-21c0-460b-a5e3-2dcc7cd1fa2f" 00:14:08.781 ], 00:14:08.781 "product_name": "Malloc disk", 00:14:08.781 "block_size": 512, 00:14:08.781 "num_blocks": 65536, 00:14:08.781 "uuid": "a7ed8b9a-21c0-460b-a5e3-2dcc7cd1fa2f", 00:14:08.781 "assigned_rate_limits": { 00:14:08.781 "rw_ios_per_sec": 0, 00:14:08.781 "rw_mbytes_per_sec": 0, 00:14:08.781 "r_mbytes_per_sec": 0, 00:14:08.781 "w_mbytes_per_sec": 0 00:14:08.781 }, 00:14:08.781 "claimed": false, 00:14:08.781 "zoned": false, 00:14:08.781 "supported_io_types": { 00:14:08.781 "read": true, 00:14:08.781 "write": true, 00:14:08.781 "unmap": true, 00:14:08.781 "flush": true, 00:14:08.781 "reset": true, 00:14:08.781 "nvme_admin": false, 00:14:08.781 "nvme_io": false, 00:14:08.781 "nvme_io_md": false, 00:14:08.781 "write_zeroes": true, 00:14:08.781 "zcopy": true, 00:14:08.781 "get_zone_info": false, 00:14:08.781 "zone_management": false, 00:14:08.781 "zone_append": false, 00:14:08.781 "compare": false, 00:14:08.781 "compare_and_write": false, 00:14:08.781 "abort": true, 00:14:08.781 "seek_hole": false, 00:14:08.781 "seek_data": false, 00:14:08.781 "copy": true, 00:14:08.781 "nvme_iov_md": false 00:14:08.781 }, 00:14:08.781 "memory_domains": [ 00:14:08.781 { 00:14:08.781 "dma_device_id": "system", 00:14:08.781 "dma_device_type": 1 00:14:08.781 }, 00:14:08.781 { 00:14:08.781 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:08.781 "dma_device_type": 2 00:14:08.781 } 00:14:08.781 ], 00:14:08.781 "driver_specific": {} 00:14:08.781 } 00:14:08.781 ] 00:14:08.781 21:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:08.781 21:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:08.781 21:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:08.781 21:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:08.781 [2024-07-13 21:57:28.097262] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:08.781 [2024-07-13 21:57:28.097301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:08.781 [2024-07-13 21:57:28.097340] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:08.781 [2024-07-13 21:57:28.099043] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:08.781 21:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:08.781 21:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:08.781 21:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:08.781 21:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:08.781 21:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:08.781 21:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:08.781 21:57:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:08.781 21:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:08.781 21:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:08.781 21:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:08.781 21:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.781 21:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.041 21:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:09.041 "name": "Existed_Raid", 00:14:09.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.041 "strip_size_kb": 64, 00:14:09.041 "state": "configuring", 00:14:09.041 "raid_level": "raid0", 00:14:09.041 "superblock": false, 00:14:09.041 "num_base_bdevs": 3, 00:14:09.041 "num_base_bdevs_discovered": 2, 00:14:09.041 "num_base_bdevs_operational": 3, 00:14:09.041 "base_bdevs_list": [ 00:14:09.041 { 00:14:09.041 "name": "BaseBdev1", 00:14:09.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.041 "is_configured": false, 00:14:09.041 "data_offset": 0, 00:14:09.041 "data_size": 0 00:14:09.041 }, 00:14:09.041 { 00:14:09.041 "name": "BaseBdev2", 00:14:09.041 "uuid": "422a475e-8361-47dc-a523-6bfe066e86ab", 00:14:09.041 "is_configured": true, 00:14:09.041 "data_offset": 0, 00:14:09.041 "data_size": 65536 00:14:09.041 }, 00:14:09.041 { 00:14:09.041 "name": "BaseBdev3", 00:14:09.041 "uuid": "a7ed8b9a-21c0-460b-a5e3-2dcc7cd1fa2f", 00:14:09.041 "is_configured": true, 00:14:09.041 "data_offset": 0, 00:14:09.041 "data_size": 65536 00:14:09.041 } 00:14:09.041 ] 00:14:09.041 }' 00:14:09.041 21:57:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:09.041 21:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.610 21:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:09.610 [2024-07-13 21:57:28.919435] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:09.610 21:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:09.610 21:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:09.610 21:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:09.610 21:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:09.610 21:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:09.610 21:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:09.610 21:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:09.610 21:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:09.610 21:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:09.610 21:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:09.610 21:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.610 21:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:14:09.870 21:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:09.870 "name": "Existed_Raid", 00:14:09.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.870 "strip_size_kb": 64, 00:14:09.870 "state": "configuring", 00:14:09.870 "raid_level": "raid0", 00:14:09.870 "superblock": false, 00:14:09.870 "num_base_bdevs": 3, 00:14:09.870 "num_base_bdevs_discovered": 1, 00:14:09.870 "num_base_bdevs_operational": 3, 00:14:09.870 "base_bdevs_list": [ 00:14:09.870 { 00:14:09.870 "name": "BaseBdev1", 00:14:09.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.870 "is_configured": false, 00:14:09.870 "data_offset": 0, 00:14:09.870 "data_size": 0 00:14:09.870 }, 00:14:09.870 { 00:14:09.870 "name": null, 00:14:09.870 "uuid": "422a475e-8361-47dc-a523-6bfe066e86ab", 00:14:09.870 "is_configured": false, 00:14:09.870 "data_offset": 0, 00:14:09.870 "data_size": 65536 00:14:09.870 }, 00:14:09.870 { 00:14:09.870 "name": "BaseBdev3", 00:14:09.870 "uuid": "a7ed8b9a-21c0-460b-a5e3-2dcc7cd1fa2f", 00:14:09.870 "is_configured": true, 00:14:09.870 "data_offset": 0, 00:14:09.870 "data_size": 65536 00:14:09.870 } 00:14:09.870 ] 00:14:09.870 }' 00:14:09.870 21:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:09.870 21:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.439 21:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:10.439 21:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.439 21:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:14:10.439 21:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:10.699 [2024-07-13 21:57:29.933174] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:10.699 BaseBdev1 00:14:10.699 21:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:10.699 21:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:10.699 21:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:10.699 21:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:10.699 21:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:10.699 21:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:10.699 21:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:10.959 21:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:10.959 [ 00:14:10.959 { 00:14:10.959 "name": "BaseBdev1", 00:14:10.959 "aliases": [ 00:14:10.959 "5ca85bdb-e598-46d9-b932-78ddf6044ac4" 00:14:10.959 ], 00:14:10.959 "product_name": "Malloc disk", 00:14:10.959 "block_size": 512, 00:14:10.959 "num_blocks": 65536, 00:14:10.959 "uuid": "5ca85bdb-e598-46d9-b932-78ddf6044ac4", 00:14:10.959 "assigned_rate_limits": { 00:14:10.959 "rw_ios_per_sec": 0, 00:14:10.959 "rw_mbytes_per_sec": 0, 00:14:10.959 "r_mbytes_per_sec": 0, 00:14:10.959 "w_mbytes_per_sec": 0 00:14:10.959 }, 00:14:10.959 "claimed": true, 00:14:10.959 "claim_type": "exclusive_write", 00:14:10.959 "zoned": false, 00:14:10.959 "supported_io_types": { 00:14:10.959 "read": 
true, 00:14:10.959 "write": true, 00:14:10.959 "unmap": true, 00:14:10.959 "flush": true, 00:14:10.959 "reset": true, 00:14:10.959 "nvme_admin": false, 00:14:10.959 "nvme_io": false, 00:14:10.959 "nvme_io_md": false, 00:14:10.959 "write_zeroes": true, 00:14:10.959 "zcopy": true, 00:14:10.959 "get_zone_info": false, 00:14:10.959 "zone_management": false, 00:14:10.959 "zone_append": false, 00:14:10.959 "compare": false, 00:14:10.959 "compare_and_write": false, 00:14:10.959 "abort": true, 00:14:10.959 "seek_hole": false, 00:14:10.959 "seek_data": false, 00:14:10.959 "copy": true, 00:14:10.959 "nvme_iov_md": false 00:14:10.959 }, 00:14:10.959 "memory_domains": [ 00:14:10.959 { 00:14:10.959 "dma_device_id": "system", 00:14:10.959 "dma_device_type": 1 00:14:10.959 }, 00:14:10.959 { 00:14:10.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.960 "dma_device_type": 2 00:14:10.960 } 00:14:10.960 ], 00:14:10.960 "driver_specific": {} 00:14:10.960 } 00:14:10.960 ] 00:14:10.960 21:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:10.960 21:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:10.960 21:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:10.960 21:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:10.960 21:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:10.960 21:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:10.960 21:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:10.960 21:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:10.960 21:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:14:10.960 21:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:10.960 21:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:10.960 21:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.960 21:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.219 21:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:11.219 "name": "Existed_Raid", 00:14:11.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.219 "strip_size_kb": 64, 00:14:11.219 "state": "configuring", 00:14:11.219 "raid_level": "raid0", 00:14:11.219 "superblock": false, 00:14:11.219 "num_base_bdevs": 3, 00:14:11.219 "num_base_bdevs_discovered": 2, 00:14:11.219 "num_base_bdevs_operational": 3, 00:14:11.219 "base_bdevs_list": [ 00:14:11.219 { 00:14:11.219 "name": "BaseBdev1", 00:14:11.219 "uuid": "5ca85bdb-e598-46d9-b932-78ddf6044ac4", 00:14:11.219 "is_configured": true, 00:14:11.219 "data_offset": 0, 00:14:11.219 "data_size": 65536 00:14:11.219 }, 00:14:11.219 { 00:14:11.219 "name": null, 00:14:11.219 "uuid": "422a475e-8361-47dc-a523-6bfe066e86ab", 00:14:11.219 "is_configured": false, 00:14:11.219 "data_offset": 0, 00:14:11.219 "data_size": 65536 00:14:11.219 }, 00:14:11.219 { 00:14:11.219 "name": "BaseBdev3", 00:14:11.219 "uuid": "a7ed8b9a-21c0-460b-a5e3-2dcc7cd1fa2f", 00:14:11.219 "is_configured": true, 00:14:11.219 "data_offset": 0, 00:14:11.219 "data_size": 65536 00:14:11.219 } 00:14:11.219 ] 00:14:11.219 }' 00:14:11.219 21:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:11.219 21:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.787 21:57:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.787 21:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:11.787 21:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:11.787 21:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:12.046 [2024-07-13 21:57:31.292853] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:12.046 21:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:12.046 21:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:12.046 21:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:12.046 21:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:12.046 21:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:12.046 21:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:12.046 21:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:12.046 21:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:12.046 21:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:12.046 21:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:12.046 21:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.046 21:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.304 21:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:12.304 "name": "Existed_Raid", 00:14:12.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.304 "strip_size_kb": 64, 00:14:12.304 "state": "configuring", 00:14:12.304 "raid_level": "raid0", 00:14:12.304 "superblock": false, 00:14:12.304 "num_base_bdevs": 3, 00:14:12.304 "num_base_bdevs_discovered": 1, 00:14:12.304 "num_base_bdevs_operational": 3, 00:14:12.304 "base_bdevs_list": [ 00:14:12.304 { 00:14:12.304 "name": "BaseBdev1", 00:14:12.304 "uuid": "5ca85bdb-e598-46d9-b932-78ddf6044ac4", 00:14:12.304 "is_configured": true, 00:14:12.304 "data_offset": 0, 00:14:12.304 "data_size": 65536 00:14:12.304 }, 00:14:12.304 { 00:14:12.304 "name": null, 00:14:12.304 "uuid": "422a475e-8361-47dc-a523-6bfe066e86ab", 00:14:12.304 "is_configured": false, 00:14:12.304 "data_offset": 0, 00:14:12.304 "data_size": 65536 00:14:12.304 }, 00:14:12.304 { 00:14:12.304 "name": null, 00:14:12.304 "uuid": "a7ed8b9a-21c0-460b-a5e3-2dcc7cd1fa2f", 00:14:12.304 "is_configured": false, 00:14:12.304 "data_offset": 0, 00:14:12.304 "data_size": 65536 00:14:12.304 } 00:14:12.304 ] 00:14:12.304 }' 00:14:12.304 21:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:12.304 21:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.871 21:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.871 21:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:12.871 21:57:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:14:12.871 21:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:13.130 [2024-07-13 21:57:32.271414] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:13.130 21:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:13.130 21:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:13.130 21:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:13.130 21:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:13.130 21:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:13.130 21:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:13.130 21:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:13.130 21:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:13.130 21:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:13.130 21:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:13.130 21:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.130 21:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.130 21:57:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:13.130 "name": "Existed_Raid", 00:14:13.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.130 "strip_size_kb": 64, 00:14:13.130 "state": "configuring", 00:14:13.130 "raid_level": "raid0", 00:14:13.130 "superblock": false, 00:14:13.130 "num_base_bdevs": 3, 00:14:13.130 "num_base_bdevs_discovered": 2, 00:14:13.130 "num_base_bdevs_operational": 3, 00:14:13.130 "base_bdevs_list": [ 00:14:13.130 { 00:14:13.130 "name": "BaseBdev1", 00:14:13.130 "uuid": "5ca85bdb-e598-46d9-b932-78ddf6044ac4", 00:14:13.130 "is_configured": true, 00:14:13.130 "data_offset": 0, 00:14:13.130 "data_size": 65536 00:14:13.130 }, 00:14:13.130 { 00:14:13.130 "name": null, 00:14:13.130 "uuid": "422a475e-8361-47dc-a523-6bfe066e86ab", 00:14:13.130 "is_configured": false, 00:14:13.130 "data_offset": 0, 00:14:13.130 "data_size": 65536 00:14:13.130 }, 00:14:13.130 { 00:14:13.130 "name": "BaseBdev3", 00:14:13.130 "uuid": "a7ed8b9a-21c0-460b-a5e3-2dcc7cd1fa2f", 00:14:13.130 "is_configured": true, 00:14:13.130 "data_offset": 0, 00:14:13.130 "data_size": 65536 00:14:13.130 } 00:14:13.130 ] 00:14:13.130 }' 00:14:13.130 21:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:13.130 21:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.697 21:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.697 21:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:13.955 21:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:13.956 21:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:14:13.956 [2024-07-13 21:57:33.278109] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:14.214 21:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:14.214 21:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:14.214 21:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:14.214 21:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:14.214 21:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:14.214 21:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:14.214 21:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:14.214 21:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:14.214 21:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:14.214 21:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:14.215 21:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.215 21:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.215 21:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:14.215 "name": "Existed_Raid", 00:14:14.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.215 "strip_size_kb": 64, 00:14:14.215 "state": "configuring", 00:14:14.215 "raid_level": "raid0", 00:14:14.215 "superblock": false, 00:14:14.215 "num_base_bdevs": 3, 00:14:14.215 
"num_base_bdevs_discovered": 1, 00:14:14.215 "num_base_bdevs_operational": 3, 00:14:14.215 "base_bdevs_list": [ 00:14:14.215 { 00:14:14.215 "name": null, 00:14:14.215 "uuid": "5ca85bdb-e598-46d9-b932-78ddf6044ac4", 00:14:14.215 "is_configured": false, 00:14:14.215 "data_offset": 0, 00:14:14.215 "data_size": 65536 00:14:14.215 }, 00:14:14.215 { 00:14:14.215 "name": null, 00:14:14.215 "uuid": "422a475e-8361-47dc-a523-6bfe066e86ab", 00:14:14.215 "is_configured": false, 00:14:14.215 "data_offset": 0, 00:14:14.215 "data_size": 65536 00:14:14.215 }, 00:14:14.215 { 00:14:14.215 "name": "BaseBdev3", 00:14:14.215 "uuid": "a7ed8b9a-21c0-460b-a5e3-2dcc7cd1fa2f", 00:14:14.215 "is_configured": true, 00:14:14.215 "data_offset": 0, 00:14:14.215 "data_size": 65536 00:14:14.215 } 00:14:14.215 ] 00:14:14.215 }' 00:14:14.215 21:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:14.215 21:57:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.783 21:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:14.783 21:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.043 21:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:14:15.043 21:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:15.043 [2024-07-13 21:57:34.363465] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:15.043 21:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:15.043 21:57:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:15.043 21:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:15.043 21:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:15.043 21:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:15.043 21:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:15.043 21:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:15.043 21:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:15.043 21:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:15.043 21:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:15.043 21:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.043 21:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.302 21:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:15.302 "name": "Existed_Raid", 00:14:15.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.302 "strip_size_kb": 64, 00:14:15.302 "state": "configuring", 00:14:15.302 "raid_level": "raid0", 00:14:15.302 "superblock": false, 00:14:15.302 "num_base_bdevs": 3, 00:14:15.302 "num_base_bdevs_discovered": 2, 00:14:15.302 "num_base_bdevs_operational": 3, 00:14:15.302 "base_bdevs_list": [ 00:14:15.302 { 00:14:15.302 "name": null, 00:14:15.302 "uuid": "5ca85bdb-e598-46d9-b932-78ddf6044ac4", 00:14:15.302 "is_configured": false, 00:14:15.302 "data_offset": 0, 
00:14:15.302 "data_size": 65536 00:14:15.302 }, 00:14:15.302 { 00:14:15.302 "name": "BaseBdev2", 00:14:15.302 "uuid": "422a475e-8361-47dc-a523-6bfe066e86ab", 00:14:15.302 "is_configured": true, 00:14:15.302 "data_offset": 0, 00:14:15.302 "data_size": 65536 00:14:15.302 }, 00:14:15.302 { 00:14:15.302 "name": "BaseBdev3", 00:14:15.302 "uuid": "a7ed8b9a-21c0-460b-a5e3-2dcc7cd1fa2f", 00:14:15.302 "is_configured": true, 00:14:15.302 "data_offset": 0, 00:14:15.302 "data_size": 65536 00:14:15.302 } 00:14:15.302 ] 00:14:15.302 }' 00:14:15.302 21:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:15.302 21:57:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.869 21:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.869 21:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:15.869 21:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:15.869 21:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.869 21:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:16.128 21:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 5ca85bdb-e598-46d9-b932-78ddf6044ac4 00:14:16.388 [2024-07-13 21:57:35.550149] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:16.388 [2024-07-13 21:57:35.550186] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 
0x616000041780 00:14:16.388 [2024-07-13 21:57:35.550197] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:16.388 [2024-07-13 21:57:35.550416] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:14:16.388 [2024-07-13 21:57:35.550571] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041780 00:14:16.388 [2024-07-13 21:57:35.550581] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000041780 00:14:16.388 [2024-07-13 21:57:35.550842] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.388 NewBaseBdev 00:14:16.388 21:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:16.388 21:57:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:14:16.388 21:57:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:16.388 21:57:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:16.388 21:57:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:16.388 21:57:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:16.388 21:57:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:16.388 21:57:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:16.648 [ 00:14:16.648 { 00:14:16.648 "name": "NewBaseBdev", 00:14:16.648 "aliases": [ 00:14:16.648 "5ca85bdb-e598-46d9-b932-78ddf6044ac4" 00:14:16.648 ], 00:14:16.648 "product_name": "Malloc disk", 
00:14:16.648 "block_size": 512, 00:14:16.648 "num_blocks": 65536, 00:14:16.648 "uuid": "5ca85bdb-e598-46d9-b932-78ddf6044ac4", 00:14:16.648 "assigned_rate_limits": { 00:14:16.648 "rw_ios_per_sec": 0, 00:14:16.648 "rw_mbytes_per_sec": 0, 00:14:16.648 "r_mbytes_per_sec": 0, 00:14:16.648 "w_mbytes_per_sec": 0 00:14:16.648 }, 00:14:16.648 "claimed": true, 00:14:16.648 "claim_type": "exclusive_write", 00:14:16.648 "zoned": false, 00:14:16.648 "supported_io_types": { 00:14:16.648 "read": true, 00:14:16.648 "write": true, 00:14:16.648 "unmap": true, 00:14:16.648 "flush": true, 00:14:16.648 "reset": true, 00:14:16.648 "nvme_admin": false, 00:14:16.648 "nvme_io": false, 00:14:16.648 "nvme_io_md": false, 00:14:16.648 "write_zeroes": true, 00:14:16.648 "zcopy": true, 00:14:16.648 "get_zone_info": false, 00:14:16.648 "zone_management": false, 00:14:16.648 "zone_append": false, 00:14:16.648 "compare": false, 00:14:16.648 "compare_and_write": false, 00:14:16.648 "abort": true, 00:14:16.648 "seek_hole": false, 00:14:16.648 "seek_data": false, 00:14:16.648 "copy": true, 00:14:16.648 "nvme_iov_md": false 00:14:16.648 }, 00:14:16.648 "memory_domains": [ 00:14:16.648 { 00:14:16.648 "dma_device_id": "system", 00:14:16.648 "dma_device_type": 1 00:14:16.648 }, 00:14:16.648 { 00:14:16.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.648 "dma_device_type": 2 00:14:16.648 } 00:14:16.648 ], 00:14:16.648 "driver_specific": {} 00:14:16.648 } 00:14:16.648 ] 00:14:16.648 21:57:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:16.648 21:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:16.648 21:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:16.648 21:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:16.648 21:57:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:16.648 21:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:16.648 21:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:16.648 21:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:16.648 21:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:16.648 21:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:16.648 21:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:16.648 21:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.648 21:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.908 21:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:16.908 "name": "Existed_Raid", 00:14:16.908 "uuid": "f8eb94d5-7297-43f2-ae32-4e73e0c83e54", 00:14:16.908 "strip_size_kb": 64, 00:14:16.908 "state": "online", 00:14:16.908 "raid_level": "raid0", 00:14:16.908 "superblock": false, 00:14:16.908 "num_base_bdevs": 3, 00:14:16.908 "num_base_bdevs_discovered": 3, 00:14:16.908 "num_base_bdevs_operational": 3, 00:14:16.908 "base_bdevs_list": [ 00:14:16.908 { 00:14:16.908 "name": "NewBaseBdev", 00:14:16.908 "uuid": "5ca85bdb-e598-46d9-b932-78ddf6044ac4", 00:14:16.908 "is_configured": true, 00:14:16.908 "data_offset": 0, 00:14:16.908 "data_size": 65536 00:14:16.908 }, 00:14:16.908 { 00:14:16.908 "name": "BaseBdev2", 00:14:16.908 "uuid": "422a475e-8361-47dc-a523-6bfe066e86ab", 00:14:16.908 "is_configured": true, 00:14:16.908 "data_offset": 0, 00:14:16.908 "data_size": 65536 00:14:16.908 }, 
00:14:16.908 { 00:14:16.908 "name": "BaseBdev3", 00:14:16.908 "uuid": "a7ed8b9a-21c0-460b-a5e3-2dcc7cd1fa2f", 00:14:16.908 "is_configured": true, 00:14:16.908 "data_offset": 0, 00:14:16.908 "data_size": 65536 00:14:16.908 } 00:14:16.908 ] 00:14:16.908 }' 00:14:16.908 21:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:16.908 21:57:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.168 21:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:17.168 21:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:17.168 21:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:17.168 21:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:17.168 21:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:17.168 21:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:17.168 21:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:17.168 21:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:17.429 [2024-07-13 21:57:36.693566] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:17.430 21:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:17.430 "name": "Existed_Raid", 00:14:17.430 "aliases": [ 00:14:17.430 "f8eb94d5-7297-43f2-ae32-4e73e0c83e54" 00:14:17.430 ], 00:14:17.430 "product_name": "Raid Volume", 00:14:17.430 "block_size": 512, 00:14:17.430 "num_blocks": 196608, 00:14:17.430 "uuid": "f8eb94d5-7297-43f2-ae32-4e73e0c83e54", 00:14:17.430 "assigned_rate_limits": 
{ 00:14:17.430 "rw_ios_per_sec": 0, 00:14:17.430 "rw_mbytes_per_sec": 0, 00:14:17.430 "r_mbytes_per_sec": 0, 00:14:17.430 "w_mbytes_per_sec": 0 00:14:17.430 }, 00:14:17.430 "claimed": false, 00:14:17.430 "zoned": false, 00:14:17.430 "supported_io_types": { 00:14:17.430 "read": true, 00:14:17.430 "write": true, 00:14:17.430 "unmap": true, 00:14:17.430 "flush": true, 00:14:17.430 "reset": true, 00:14:17.430 "nvme_admin": false, 00:14:17.430 "nvme_io": false, 00:14:17.430 "nvme_io_md": false, 00:14:17.430 "write_zeroes": true, 00:14:17.430 "zcopy": false, 00:14:17.430 "get_zone_info": false, 00:14:17.430 "zone_management": false, 00:14:17.430 "zone_append": false, 00:14:17.430 "compare": false, 00:14:17.430 "compare_and_write": false, 00:14:17.430 "abort": false, 00:14:17.430 "seek_hole": false, 00:14:17.430 "seek_data": false, 00:14:17.430 "copy": false, 00:14:17.430 "nvme_iov_md": false 00:14:17.430 }, 00:14:17.430 "memory_domains": [ 00:14:17.430 { 00:14:17.430 "dma_device_id": "system", 00:14:17.430 "dma_device_type": 1 00:14:17.430 }, 00:14:17.430 { 00:14:17.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.430 "dma_device_type": 2 00:14:17.430 }, 00:14:17.430 { 00:14:17.430 "dma_device_id": "system", 00:14:17.430 "dma_device_type": 1 00:14:17.430 }, 00:14:17.430 { 00:14:17.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.430 "dma_device_type": 2 00:14:17.430 }, 00:14:17.430 { 00:14:17.430 "dma_device_id": "system", 00:14:17.430 "dma_device_type": 1 00:14:17.430 }, 00:14:17.430 { 00:14:17.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.430 "dma_device_type": 2 00:14:17.430 } 00:14:17.430 ], 00:14:17.430 "driver_specific": { 00:14:17.430 "raid": { 00:14:17.430 "uuid": "f8eb94d5-7297-43f2-ae32-4e73e0c83e54", 00:14:17.430 "strip_size_kb": 64, 00:14:17.430 "state": "online", 00:14:17.430 "raid_level": "raid0", 00:14:17.430 "superblock": false, 00:14:17.430 "num_base_bdevs": 3, 00:14:17.430 "num_base_bdevs_discovered": 3, 00:14:17.430 
"num_base_bdevs_operational": 3, 00:14:17.430 "base_bdevs_list": [ 00:14:17.430 { 00:14:17.430 "name": "NewBaseBdev", 00:14:17.430 "uuid": "5ca85bdb-e598-46d9-b932-78ddf6044ac4", 00:14:17.430 "is_configured": true, 00:14:17.430 "data_offset": 0, 00:14:17.430 "data_size": 65536 00:14:17.430 }, 00:14:17.430 { 00:14:17.430 "name": "BaseBdev2", 00:14:17.430 "uuid": "422a475e-8361-47dc-a523-6bfe066e86ab", 00:14:17.430 "is_configured": true, 00:14:17.430 "data_offset": 0, 00:14:17.430 "data_size": 65536 00:14:17.430 }, 00:14:17.430 { 00:14:17.430 "name": "BaseBdev3", 00:14:17.430 "uuid": "a7ed8b9a-21c0-460b-a5e3-2dcc7cd1fa2f", 00:14:17.430 "is_configured": true, 00:14:17.430 "data_offset": 0, 00:14:17.430 "data_size": 65536 00:14:17.430 } 00:14:17.430 ] 00:14:17.430 } 00:14:17.430 } 00:14:17.430 }' 00:14:17.430 21:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:17.430 21:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:17.430 BaseBdev2 00:14:17.430 BaseBdev3' 00:14:17.430 21:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:17.430 21:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:17.430 21:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:17.745 21:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:17.745 "name": "NewBaseBdev", 00:14:17.745 "aliases": [ 00:14:17.745 "5ca85bdb-e598-46d9-b932-78ddf6044ac4" 00:14:17.745 ], 00:14:17.745 "product_name": "Malloc disk", 00:14:17.745 "block_size": 512, 00:14:17.745 "num_blocks": 65536, 00:14:17.745 "uuid": "5ca85bdb-e598-46d9-b932-78ddf6044ac4", 00:14:17.745 
"assigned_rate_limits": { 00:14:17.745 "rw_ios_per_sec": 0, 00:14:17.745 "rw_mbytes_per_sec": 0, 00:14:17.745 "r_mbytes_per_sec": 0, 00:14:17.745 "w_mbytes_per_sec": 0 00:14:17.745 }, 00:14:17.745 "claimed": true, 00:14:17.745 "claim_type": "exclusive_write", 00:14:17.745 "zoned": false, 00:14:17.745 "supported_io_types": { 00:14:17.745 "read": true, 00:14:17.745 "write": true, 00:14:17.745 "unmap": true, 00:14:17.745 "flush": true, 00:14:17.745 "reset": true, 00:14:17.745 "nvme_admin": false, 00:14:17.745 "nvme_io": false, 00:14:17.745 "nvme_io_md": false, 00:14:17.745 "write_zeroes": true, 00:14:17.745 "zcopy": true, 00:14:17.745 "get_zone_info": false, 00:14:17.745 "zone_management": false, 00:14:17.745 "zone_append": false, 00:14:17.745 "compare": false, 00:14:17.745 "compare_and_write": false, 00:14:17.745 "abort": true, 00:14:17.745 "seek_hole": false, 00:14:17.745 "seek_data": false, 00:14:17.745 "copy": true, 00:14:17.745 "nvme_iov_md": false 00:14:17.745 }, 00:14:17.745 "memory_domains": [ 00:14:17.745 { 00:14:17.745 "dma_device_id": "system", 00:14:17.745 "dma_device_type": 1 00:14:17.745 }, 00:14:17.745 { 00:14:17.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.745 "dma_device_type": 2 00:14:17.745 } 00:14:17.745 ], 00:14:17.745 "driver_specific": {} 00:14:17.745 }' 00:14:17.745 21:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:17.745 21:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:17.745 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:17.745 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:17.745 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:17.745 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:17.745 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 
-- # jq .md_interleave 00:14:17.745 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:18.027 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:18.027 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:18.027 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:18.027 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:18.028 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:18.028 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:18.028 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:18.028 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:18.028 "name": "BaseBdev2", 00:14:18.028 "aliases": [ 00:14:18.028 "422a475e-8361-47dc-a523-6bfe066e86ab" 00:14:18.028 ], 00:14:18.028 "product_name": "Malloc disk", 00:14:18.028 "block_size": 512, 00:14:18.028 "num_blocks": 65536, 00:14:18.028 "uuid": "422a475e-8361-47dc-a523-6bfe066e86ab", 00:14:18.028 "assigned_rate_limits": { 00:14:18.028 "rw_ios_per_sec": 0, 00:14:18.028 "rw_mbytes_per_sec": 0, 00:14:18.028 "r_mbytes_per_sec": 0, 00:14:18.028 "w_mbytes_per_sec": 0 00:14:18.028 }, 00:14:18.028 "claimed": true, 00:14:18.028 "claim_type": "exclusive_write", 00:14:18.028 "zoned": false, 00:14:18.028 "supported_io_types": { 00:14:18.028 "read": true, 00:14:18.028 "write": true, 00:14:18.028 "unmap": true, 00:14:18.028 "flush": true, 00:14:18.028 "reset": true, 00:14:18.028 "nvme_admin": false, 00:14:18.028 "nvme_io": false, 00:14:18.028 "nvme_io_md": false, 00:14:18.028 "write_zeroes": true, 00:14:18.028 "zcopy": 
true, 00:14:18.028 "get_zone_info": false, 00:14:18.028 "zone_management": false, 00:14:18.028 "zone_append": false, 00:14:18.028 "compare": false, 00:14:18.028 "compare_and_write": false, 00:14:18.028 "abort": true, 00:14:18.028 "seek_hole": false, 00:14:18.028 "seek_data": false, 00:14:18.028 "copy": true, 00:14:18.028 "nvme_iov_md": false 00:14:18.028 }, 00:14:18.028 "memory_domains": [ 00:14:18.028 { 00:14:18.028 "dma_device_id": "system", 00:14:18.028 "dma_device_type": 1 00:14:18.028 }, 00:14:18.028 { 00:14:18.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.028 "dma_device_type": 2 00:14:18.028 } 00:14:18.028 ], 00:14:18.028 "driver_specific": {} 00:14:18.028 }' 00:14:18.028 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:18.287 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:18.287 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:18.287 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:18.287 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:18.287 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:18.287 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:18.287 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:18.287 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:18.287 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:18.546 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:18.546 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:18.546 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 
-- # for name in $base_bdev_names 00:14:18.546 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:18.546 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:18.546 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:18.546 "name": "BaseBdev3", 00:14:18.546 "aliases": [ 00:14:18.546 "a7ed8b9a-21c0-460b-a5e3-2dcc7cd1fa2f" 00:14:18.546 ], 00:14:18.546 "product_name": "Malloc disk", 00:14:18.546 "block_size": 512, 00:14:18.546 "num_blocks": 65536, 00:14:18.546 "uuid": "a7ed8b9a-21c0-460b-a5e3-2dcc7cd1fa2f", 00:14:18.546 "assigned_rate_limits": { 00:14:18.546 "rw_ios_per_sec": 0, 00:14:18.546 "rw_mbytes_per_sec": 0, 00:14:18.546 "r_mbytes_per_sec": 0, 00:14:18.546 "w_mbytes_per_sec": 0 00:14:18.546 }, 00:14:18.546 "claimed": true, 00:14:18.546 "claim_type": "exclusive_write", 00:14:18.546 "zoned": false, 00:14:18.546 "supported_io_types": { 00:14:18.546 "read": true, 00:14:18.546 "write": true, 00:14:18.546 "unmap": true, 00:14:18.546 "flush": true, 00:14:18.546 "reset": true, 00:14:18.546 "nvme_admin": false, 00:14:18.546 "nvme_io": false, 00:14:18.546 "nvme_io_md": false, 00:14:18.546 "write_zeroes": true, 00:14:18.546 "zcopy": true, 00:14:18.546 "get_zone_info": false, 00:14:18.546 "zone_management": false, 00:14:18.546 "zone_append": false, 00:14:18.546 "compare": false, 00:14:18.546 "compare_and_write": false, 00:14:18.546 "abort": true, 00:14:18.546 "seek_hole": false, 00:14:18.546 "seek_data": false, 00:14:18.546 "copy": true, 00:14:18.546 "nvme_iov_md": false 00:14:18.546 }, 00:14:18.546 "memory_domains": [ 00:14:18.546 { 00:14:18.546 "dma_device_id": "system", 00:14:18.546 "dma_device_type": 1 00:14:18.546 }, 00:14:18.546 { 00:14:18.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.546 "dma_device_type": 2 00:14:18.546 } 
00:14:18.546 ], 00:14:18.546 "driver_specific": {} 00:14:18.546 }' 00:14:18.546 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:18.805 21:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:18.805 21:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:18.805 21:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:18.805 21:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:18.805 21:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:18.805 21:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:18.805 21:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:18.805 21:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:18.805 21:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:19.065 21:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:19.065 21:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:19.065 21:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:19.065 [2024-07-13 21:57:38.361718] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:19.065 [2024-07-13 21:57:38.361744] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:19.065 [2024-07-13 21:57:38.361818] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:19.065 [2024-07-13 21:57:38.361872] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 
0, going to free all in destruct 00:14:19.065 [2024-07-13 21:57:38.361889] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041780 name Existed_Raid, state offline 00:14:19.065 21:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1368491 00:14:19.065 21:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 1368491 ']' 00:14:19.065 21:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 1368491 00:14:19.065 21:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:14:19.065 21:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:19.065 21:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1368491 00:14:19.065 21:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:19.065 21:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:19.065 21:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1368491' 00:14:19.065 killing process with pid 1368491 00:14:19.065 21:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 1368491 00:14:19.065 [2024-07-13 21:57:38.436998] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:19.065 21:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 1368491 00:14:19.325 [2024-07-13 21:57:38.663462] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:14:20.704 00:14:20.704 real 0m23.127s 00:14:20.704 user 0m40.535s 00:14:20.704 sys 0m4.323s 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.704 ************************************ 00:14:20.704 END TEST raid_state_function_test 00:14:20.704 ************************************ 00:14:20.704 21:57:39 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:20.704 21:57:39 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:14:20.704 21:57:39 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:20.704 21:57:39 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:20.704 21:57:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:20.704 ************************************ 00:14:20.704 START TEST raid_state_function_test_sb 00:14:20.704 ************************************ 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 true 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs 
)) 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@244 -- # raid_pid=1373058 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1373058' 00:14:20.704 Process raid pid: 1373058 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1373058 /var/tmp/spdk-raid.sock 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1373058 ']' 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:20.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:20.704 21:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.704 [2024-07-13 21:57:40.075823] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:20.704 [2024-07-13 21:57:40.075931] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.963 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.963 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.963 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.963 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.963 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.963 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.963 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.963 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.963 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.963 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.963 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.963 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:20.963 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.963 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.963 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.963 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.963 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.963 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.963 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.963 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.963 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.963 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.963 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.963 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.963 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.964 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.964 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:20.964 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.964 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.964 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.964 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.964 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.964 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.964 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:20.964 [2024-07-13 21:57:40.242387] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.222 [2024-07-13 21:57:40.461819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.482 [2024-07-13 21:57:40.711074] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:21.482 [2024-07-13 21:57:40.711107] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:21.482 21:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:21.482 21:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:14:21.482 21:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:21.740 [2024-07-13 21:57:41.013390] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:21.741 [2024-07-13 21:57:41.013440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:14:21.741 [2024-07-13 21:57:41.013451] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:21.741 [2024-07-13 21:57:41.013478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:21.741 [2024-07-13 21:57:41.013486] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:21.741 [2024-07-13 21:57:41.013500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:21.741 21:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:21.741 21:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:21.741 21:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:21.741 21:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:21.741 21:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:21.741 21:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:21.741 21:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:21.741 21:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:21.741 21:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:21.741 21:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:21.741 21:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.741 21:57:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.000 21:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:22.000 "name": "Existed_Raid", 00:14:22.000 "uuid": "b77b8a34-214a-4f86-bc21-51d8b9773928", 00:14:22.000 "strip_size_kb": 64, 00:14:22.000 "state": "configuring", 00:14:22.000 "raid_level": "raid0", 00:14:22.000 "superblock": true, 00:14:22.000 "num_base_bdevs": 3, 00:14:22.000 "num_base_bdevs_discovered": 0, 00:14:22.000 "num_base_bdevs_operational": 3, 00:14:22.000 "base_bdevs_list": [ 00:14:22.000 { 00:14:22.000 "name": "BaseBdev1", 00:14:22.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.000 "is_configured": false, 00:14:22.000 "data_offset": 0, 00:14:22.000 "data_size": 0 00:14:22.000 }, 00:14:22.000 { 00:14:22.000 "name": "BaseBdev2", 00:14:22.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.000 "is_configured": false, 00:14:22.000 "data_offset": 0, 00:14:22.000 "data_size": 0 00:14:22.000 }, 00:14:22.000 { 00:14:22.000 "name": "BaseBdev3", 00:14:22.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.000 "is_configured": false, 00:14:22.000 "data_offset": 0, 00:14:22.000 "data_size": 0 00:14:22.000 } 00:14:22.000 ] 00:14:22.000 }' 00:14:22.000 21:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:22.000 21:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.568 21:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:22.568 [2024-07-13 21:57:41.843569] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:22.568 [2024-07-13 21:57:41.843605] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state 
configuring 00:14:22.568 21:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:22.827 [2024-07-13 21:57:42.012085] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:22.827 [2024-07-13 21:57:42.012128] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:22.827 [2024-07-13 21:57:42.012139] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:22.827 [2024-07-13 21:57:42.012154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:22.827 [2024-07-13 21:57:42.012162] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:22.827 [2024-07-13 21:57:42.012173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:22.827 21:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:23.086 [2024-07-13 21:57:42.221950] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:23.086 BaseBdev1 00:14:23.086 21:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:23.086 21:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:23.086 21:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:23.086 21:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:23.086 21:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' 
]] 00:14:23.086 21:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:23.086 21:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:23.086 21:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:23.345 [ 00:14:23.345 { 00:14:23.345 "name": "BaseBdev1", 00:14:23.345 "aliases": [ 00:14:23.345 "15e49550-a244-442a-b5f3-a026dd009813" 00:14:23.345 ], 00:14:23.345 "product_name": "Malloc disk", 00:14:23.345 "block_size": 512, 00:14:23.345 "num_blocks": 65536, 00:14:23.345 "uuid": "15e49550-a244-442a-b5f3-a026dd009813", 00:14:23.345 "assigned_rate_limits": { 00:14:23.345 "rw_ios_per_sec": 0, 00:14:23.345 "rw_mbytes_per_sec": 0, 00:14:23.345 "r_mbytes_per_sec": 0, 00:14:23.345 "w_mbytes_per_sec": 0 00:14:23.345 }, 00:14:23.345 "claimed": true, 00:14:23.345 "claim_type": "exclusive_write", 00:14:23.345 "zoned": false, 00:14:23.345 "supported_io_types": { 00:14:23.345 "read": true, 00:14:23.345 "write": true, 00:14:23.345 "unmap": true, 00:14:23.345 "flush": true, 00:14:23.345 "reset": true, 00:14:23.345 "nvme_admin": false, 00:14:23.345 "nvme_io": false, 00:14:23.345 "nvme_io_md": false, 00:14:23.345 "write_zeroes": true, 00:14:23.345 "zcopy": true, 00:14:23.345 "get_zone_info": false, 00:14:23.345 "zone_management": false, 00:14:23.345 "zone_append": false, 00:14:23.345 "compare": false, 00:14:23.345 "compare_and_write": false, 00:14:23.345 "abort": true, 00:14:23.345 "seek_hole": false, 00:14:23.345 "seek_data": false, 00:14:23.345 "copy": true, 00:14:23.345 "nvme_iov_md": false 00:14:23.345 }, 00:14:23.345 "memory_domains": [ 00:14:23.345 { 00:14:23.345 "dma_device_id": "system", 00:14:23.345 "dma_device_type": 1 
00:14:23.345 }, 00:14:23.345 { 00:14:23.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.345 "dma_device_type": 2 00:14:23.345 } 00:14:23.345 ], 00:14:23.345 "driver_specific": {} 00:14:23.345 } 00:14:23.345 ] 00:14:23.346 21:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:23.346 21:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:23.346 21:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:23.346 21:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:23.346 21:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:23.346 21:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:23.346 21:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:23.346 21:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:23.346 21:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:23.346 21:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:23.346 21:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:23.346 21:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.346 21:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.604 21:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:23.604 "name": "Existed_Raid", 
00:14:23.604 "uuid": "d84c705a-8f32-4b9f-a9e4-489b4cba2c19", 00:14:23.604 "strip_size_kb": 64, 00:14:23.604 "state": "configuring", 00:14:23.604 "raid_level": "raid0", 00:14:23.604 "superblock": true, 00:14:23.604 "num_base_bdevs": 3, 00:14:23.604 "num_base_bdevs_discovered": 1, 00:14:23.604 "num_base_bdevs_operational": 3, 00:14:23.605 "base_bdevs_list": [ 00:14:23.605 { 00:14:23.605 "name": "BaseBdev1", 00:14:23.605 "uuid": "15e49550-a244-442a-b5f3-a026dd009813", 00:14:23.605 "is_configured": true, 00:14:23.605 "data_offset": 2048, 00:14:23.605 "data_size": 63488 00:14:23.605 }, 00:14:23.605 { 00:14:23.605 "name": "BaseBdev2", 00:14:23.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.605 "is_configured": false, 00:14:23.605 "data_offset": 0, 00:14:23.605 "data_size": 0 00:14:23.605 }, 00:14:23.605 { 00:14:23.605 "name": "BaseBdev3", 00:14:23.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.605 "is_configured": false, 00:14:23.605 "data_offset": 0, 00:14:23.605 "data_size": 0 00:14:23.605 } 00:14:23.605 ] 00:14:23.605 }' 00:14:23.605 21:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:23.605 21:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.863 21:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:24.121 [2024-07-13 21:57:43.401070] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:24.121 [2024-07-13 21:57:43.401121] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:14:24.121 21:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 
BaseBdev3' -n Existed_Raid 00:14:24.380 [2024-07-13 21:57:43.569610] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:24.380 [2024-07-13 21:57:43.571395] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:24.380 [2024-07-13 21:57:43.571433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:24.380 [2024-07-13 21:57:43.571444] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:24.380 [2024-07-13 21:57:43.571456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:24.380 21:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:24.380 21:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:24.380 21:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:24.380 21:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:24.380 21:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:24.380 21:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:24.380 21:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:24.380 21:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:24.380 21:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:24.380 21:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:24.380 21:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:24.380 
21:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:24.380 21:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.380 21:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.380 21:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:24.380 "name": "Existed_Raid", 00:14:24.380 "uuid": "421ddc5d-6aff-4f2b-8b75-330c566ced57", 00:14:24.380 "strip_size_kb": 64, 00:14:24.380 "state": "configuring", 00:14:24.380 "raid_level": "raid0", 00:14:24.380 "superblock": true, 00:14:24.380 "num_base_bdevs": 3, 00:14:24.380 "num_base_bdevs_discovered": 1, 00:14:24.380 "num_base_bdevs_operational": 3, 00:14:24.380 "base_bdevs_list": [ 00:14:24.380 { 00:14:24.380 "name": "BaseBdev1", 00:14:24.380 "uuid": "15e49550-a244-442a-b5f3-a026dd009813", 00:14:24.380 "is_configured": true, 00:14:24.380 "data_offset": 2048, 00:14:24.380 "data_size": 63488 00:14:24.380 }, 00:14:24.380 { 00:14:24.380 "name": "BaseBdev2", 00:14:24.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.380 "is_configured": false, 00:14:24.380 "data_offset": 0, 00:14:24.380 "data_size": 0 00:14:24.380 }, 00:14:24.380 { 00:14:24.380 "name": "BaseBdev3", 00:14:24.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.380 "is_configured": false, 00:14:24.380 "data_offset": 0, 00:14:24.380 "data_size": 0 00:14:24.380 } 00:14:24.380 ] 00:14:24.380 }' 00:14:24.380 21:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:24.380 21:57:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.947 21:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:25.206 [2024-07-13 21:57:44.462721] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:25.206 BaseBdev2 00:14:25.206 21:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:25.206 21:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:25.206 21:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:25.206 21:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:25.206 21:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:25.206 21:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:25.206 21:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:25.465 21:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:25.465 [ 00:14:25.465 { 00:14:25.465 "name": "BaseBdev2", 00:14:25.465 "aliases": [ 00:14:25.465 "25823e30-2c0b-43e6-b438-88fc291f12f5" 00:14:25.465 ], 00:14:25.465 "product_name": "Malloc disk", 00:14:25.465 "block_size": 512, 00:14:25.465 "num_blocks": 65536, 00:14:25.465 "uuid": "25823e30-2c0b-43e6-b438-88fc291f12f5", 00:14:25.465 "assigned_rate_limits": { 00:14:25.465 "rw_ios_per_sec": 0, 00:14:25.465 "rw_mbytes_per_sec": 0, 00:14:25.465 "r_mbytes_per_sec": 0, 00:14:25.465 "w_mbytes_per_sec": 0 00:14:25.465 }, 00:14:25.465 "claimed": true, 00:14:25.465 "claim_type": "exclusive_write", 00:14:25.465 "zoned": false, 00:14:25.465 "supported_io_types": { 
00:14:25.465 "read": true, 00:14:25.465 "write": true, 00:14:25.465 "unmap": true, 00:14:25.465 "flush": true, 00:14:25.465 "reset": true, 00:14:25.465 "nvme_admin": false, 00:14:25.465 "nvme_io": false, 00:14:25.465 "nvme_io_md": false, 00:14:25.465 "write_zeroes": true, 00:14:25.465 "zcopy": true, 00:14:25.465 "get_zone_info": false, 00:14:25.465 "zone_management": false, 00:14:25.465 "zone_append": false, 00:14:25.465 "compare": false, 00:14:25.465 "compare_and_write": false, 00:14:25.465 "abort": true, 00:14:25.465 "seek_hole": false, 00:14:25.465 "seek_data": false, 00:14:25.465 "copy": true, 00:14:25.465 "nvme_iov_md": false 00:14:25.465 }, 00:14:25.465 "memory_domains": [ 00:14:25.465 { 00:14:25.465 "dma_device_id": "system", 00:14:25.465 "dma_device_type": 1 00:14:25.465 }, 00:14:25.465 { 00:14:25.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.465 "dma_device_type": 2 00:14:25.465 } 00:14:25.465 ], 00:14:25.465 "driver_specific": {} 00:14:25.465 } 00:14:25.465 ] 00:14:25.465 21:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:25.465 21:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:25.465 21:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:25.465 21:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:25.465 21:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:25.465 21:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:25.465 21:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:25.465 21:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:25.465 21:57:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:25.465 21:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:25.465 21:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:25.465 21:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:25.465 21:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:25.465 21:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.465 21:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.724 21:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:25.724 "name": "Existed_Raid", 00:14:25.724 "uuid": "421ddc5d-6aff-4f2b-8b75-330c566ced57", 00:14:25.724 "strip_size_kb": 64, 00:14:25.724 "state": "configuring", 00:14:25.724 "raid_level": "raid0", 00:14:25.724 "superblock": true, 00:14:25.724 "num_base_bdevs": 3, 00:14:25.724 "num_base_bdevs_discovered": 2, 00:14:25.724 "num_base_bdevs_operational": 3, 00:14:25.724 "base_bdevs_list": [ 00:14:25.724 { 00:14:25.724 "name": "BaseBdev1", 00:14:25.724 "uuid": "15e49550-a244-442a-b5f3-a026dd009813", 00:14:25.724 "is_configured": true, 00:14:25.724 "data_offset": 2048, 00:14:25.724 "data_size": 63488 00:14:25.724 }, 00:14:25.724 { 00:14:25.724 "name": "BaseBdev2", 00:14:25.724 "uuid": "25823e30-2c0b-43e6-b438-88fc291f12f5", 00:14:25.724 "is_configured": true, 00:14:25.724 "data_offset": 2048, 00:14:25.724 "data_size": 63488 00:14:25.724 }, 00:14:25.724 { 00:14:25.724 "name": "BaseBdev3", 00:14:25.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.724 "is_configured": false, 00:14:25.724 "data_offset": 0, 00:14:25.724 
"data_size": 0 00:14:25.724 } 00:14:25.724 ] 00:14:25.724 }' 00:14:25.724 21:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:25.725 21:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.292 21:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:26.292 [2024-07-13 21:57:45.670076] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:26.292 [2024-07-13 21:57:45.670308] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:14:26.292 [2024-07-13 21:57:45.670327] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:26.292 [2024-07-13 21:57:45.670583] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:14:26.292 [2024-07-13 21:57:45.670767] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:14:26.292 [2024-07-13 21:57:45.670778] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:14:26.292 [2024-07-13 21:57:45.670922] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.292 BaseBdev3 00:14:26.550 21:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:26.550 21:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:26.550 21:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:26.550 21:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:26.550 21:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:26.550 
21:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:26.551 21:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:26.551 21:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:26.810 [ 00:14:26.810 { 00:14:26.810 "name": "BaseBdev3", 00:14:26.810 "aliases": [ 00:14:26.810 "08afed3e-cbe4-461f-a1bd-4899855ae079" 00:14:26.810 ], 00:14:26.810 "product_name": "Malloc disk", 00:14:26.810 "block_size": 512, 00:14:26.810 "num_blocks": 65536, 00:14:26.810 "uuid": "08afed3e-cbe4-461f-a1bd-4899855ae079", 00:14:26.810 "assigned_rate_limits": { 00:14:26.810 "rw_ios_per_sec": 0, 00:14:26.810 "rw_mbytes_per_sec": 0, 00:14:26.810 "r_mbytes_per_sec": 0, 00:14:26.810 "w_mbytes_per_sec": 0 00:14:26.810 }, 00:14:26.810 "claimed": true, 00:14:26.810 "claim_type": "exclusive_write", 00:14:26.810 "zoned": false, 00:14:26.810 "supported_io_types": { 00:14:26.810 "read": true, 00:14:26.810 "write": true, 00:14:26.810 "unmap": true, 00:14:26.810 "flush": true, 00:14:26.810 "reset": true, 00:14:26.810 "nvme_admin": false, 00:14:26.810 "nvme_io": false, 00:14:26.810 "nvme_io_md": false, 00:14:26.810 "write_zeroes": true, 00:14:26.810 "zcopy": true, 00:14:26.810 "get_zone_info": false, 00:14:26.810 "zone_management": false, 00:14:26.810 "zone_append": false, 00:14:26.810 "compare": false, 00:14:26.810 "compare_and_write": false, 00:14:26.810 "abort": true, 00:14:26.810 "seek_hole": false, 00:14:26.810 "seek_data": false, 00:14:26.810 "copy": true, 00:14:26.810 "nvme_iov_md": false 00:14:26.810 }, 00:14:26.810 "memory_domains": [ 00:14:26.810 { 00:14:26.810 "dma_device_id": "system", 00:14:26.810 "dma_device_type": 1 00:14:26.810 }, 
00:14:26.810 { 00:14:26.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.810 "dma_device_type": 2 00:14:26.810 } 00:14:26.810 ], 00:14:26.810 "driver_specific": {} 00:14:26.810 } 00:14:26.810 ] 00:14:26.810 21:57:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:26.810 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:26.810 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:26.810 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:26.810 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:26.810 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:26.810 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:26.810 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:26.810 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:26.810 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:26.810 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:26.810 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:26.810 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:26.810 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.810 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:14:27.069 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:27.069 "name": "Existed_Raid", 00:14:27.069 "uuid": "421ddc5d-6aff-4f2b-8b75-330c566ced57", 00:14:27.069 "strip_size_kb": 64, 00:14:27.069 "state": "online", 00:14:27.069 "raid_level": "raid0", 00:14:27.069 "superblock": true, 00:14:27.069 "num_base_bdevs": 3, 00:14:27.069 "num_base_bdevs_discovered": 3, 00:14:27.069 "num_base_bdevs_operational": 3, 00:14:27.069 "base_bdevs_list": [ 00:14:27.069 { 00:14:27.069 "name": "BaseBdev1", 00:14:27.069 "uuid": "15e49550-a244-442a-b5f3-a026dd009813", 00:14:27.069 "is_configured": true, 00:14:27.069 "data_offset": 2048, 00:14:27.069 "data_size": 63488 00:14:27.069 }, 00:14:27.069 { 00:14:27.069 "name": "BaseBdev2", 00:14:27.069 "uuid": "25823e30-2c0b-43e6-b438-88fc291f12f5", 00:14:27.069 "is_configured": true, 00:14:27.069 "data_offset": 2048, 00:14:27.069 "data_size": 63488 00:14:27.069 }, 00:14:27.069 { 00:14:27.069 "name": "BaseBdev3", 00:14:27.069 "uuid": "08afed3e-cbe4-461f-a1bd-4899855ae079", 00:14:27.069 "is_configured": true, 00:14:27.069 "data_offset": 2048, 00:14:27.069 "data_size": 63488 00:14:27.069 } 00:14:27.069 ] 00:14:27.069 }' 00:14:27.069 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:27.069 21:57:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.327 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:27.327 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:27.327 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:27.327 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:27.327 21:57:46 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:27.327 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:27.327 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:27.327 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:27.586 [2024-07-13 21:57:46.817398] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:27.586 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:27.586 "name": "Existed_Raid", 00:14:27.586 "aliases": [ 00:14:27.586 "421ddc5d-6aff-4f2b-8b75-330c566ced57" 00:14:27.586 ], 00:14:27.586 "product_name": "Raid Volume", 00:14:27.586 "block_size": 512, 00:14:27.586 "num_blocks": 190464, 00:14:27.586 "uuid": "421ddc5d-6aff-4f2b-8b75-330c566ced57", 00:14:27.586 "assigned_rate_limits": { 00:14:27.586 "rw_ios_per_sec": 0, 00:14:27.586 "rw_mbytes_per_sec": 0, 00:14:27.586 "r_mbytes_per_sec": 0, 00:14:27.586 "w_mbytes_per_sec": 0 00:14:27.586 }, 00:14:27.586 "claimed": false, 00:14:27.586 "zoned": false, 00:14:27.586 "supported_io_types": { 00:14:27.586 "read": true, 00:14:27.586 "write": true, 00:14:27.586 "unmap": true, 00:14:27.586 "flush": true, 00:14:27.586 "reset": true, 00:14:27.586 "nvme_admin": false, 00:14:27.586 "nvme_io": false, 00:14:27.586 "nvme_io_md": false, 00:14:27.586 "write_zeroes": true, 00:14:27.586 "zcopy": false, 00:14:27.586 "get_zone_info": false, 00:14:27.586 "zone_management": false, 00:14:27.586 "zone_append": false, 00:14:27.586 "compare": false, 00:14:27.586 "compare_and_write": false, 00:14:27.586 "abort": false, 00:14:27.586 "seek_hole": false, 00:14:27.586 "seek_data": false, 00:14:27.586 "copy": false, 00:14:27.586 "nvme_iov_md": false 00:14:27.586 }, 00:14:27.586 "memory_domains": [ 00:14:27.586 { 
00:14:27.586 "dma_device_id": "system", 00:14:27.586 "dma_device_type": 1 00:14:27.586 }, 00:14:27.586 { 00:14:27.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.586 "dma_device_type": 2 00:14:27.586 }, 00:14:27.586 { 00:14:27.586 "dma_device_id": "system", 00:14:27.586 "dma_device_type": 1 00:14:27.586 }, 00:14:27.586 { 00:14:27.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.586 "dma_device_type": 2 00:14:27.586 }, 00:14:27.586 { 00:14:27.586 "dma_device_id": "system", 00:14:27.586 "dma_device_type": 1 00:14:27.586 }, 00:14:27.586 { 00:14:27.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.586 "dma_device_type": 2 00:14:27.586 } 00:14:27.586 ], 00:14:27.586 "driver_specific": { 00:14:27.586 "raid": { 00:14:27.586 "uuid": "421ddc5d-6aff-4f2b-8b75-330c566ced57", 00:14:27.586 "strip_size_kb": 64, 00:14:27.586 "state": "online", 00:14:27.586 "raid_level": "raid0", 00:14:27.586 "superblock": true, 00:14:27.586 "num_base_bdevs": 3, 00:14:27.586 "num_base_bdevs_discovered": 3, 00:14:27.586 "num_base_bdevs_operational": 3, 00:14:27.586 "base_bdevs_list": [ 00:14:27.586 { 00:14:27.586 "name": "BaseBdev1", 00:14:27.587 "uuid": "15e49550-a244-442a-b5f3-a026dd009813", 00:14:27.587 "is_configured": true, 00:14:27.587 "data_offset": 2048, 00:14:27.587 "data_size": 63488 00:14:27.587 }, 00:14:27.587 { 00:14:27.587 "name": "BaseBdev2", 00:14:27.587 "uuid": "25823e30-2c0b-43e6-b438-88fc291f12f5", 00:14:27.587 "is_configured": true, 00:14:27.587 "data_offset": 2048, 00:14:27.587 "data_size": 63488 00:14:27.587 }, 00:14:27.587 { 00:14:27.587 "name": "BaseBdev3", 00:14:27.587 "uuid": "08afed3e-cbe4-461f-a1bd-4899855ae079", 00:14:27.587 "is_configured": true, 00:14:27.587 "data_offset": 2048, 00:14:27.587 "data_size": 63488 00:14:27.587 } 00:14:27.587 ] 00:14:27.587 } 00:14:27.587 } 00:14:27.587 }' 00:14:27.587 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == 
true).name' 00:14:27.587 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:27.587 BaseBdev2 00:14:27.587 BaseBdev3' 00:14:27.587 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:27.587 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:27.587 21:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:27.846 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:27.846 "name": "BaseBdev1", 00:14:27.846 "aliases": [ 00:14:27.846 "15e49550-a244-442a-b5f3-a026dd009813" 00:14:27.846 ], 00:14:27.846 "product_name": "Malloc disk", 00:14:27.846 "block_size": 512, 00:14:27.846 "num_blocks": 65536, 00:14:27.846 "uuid": "15e49550-a244-442a-b5f3-a026dd009813", 00:14:27.846 "assigned_rate_limits": { 00:14:27.846 "rw_ios_per_sec": 0, 00:14:27.846 "rw_mbytes_per_sec": 0, 00:14:27.846 "r_mbytes_per_sec": 0, 00:14:27.846 "w_mbytes_per_sec": 0 00:14:27.846 }, 00:14:27.846 "claimed": true, 00:14:27.846 "claim_type": "exclusive_write", 00:14:27.846 "zoned": false, 00:14:27.846 "supported_io_types": { 00:14:27.846 "read": true, 00:14:27.846 "write": true, 00:14:27.846 "unmap": true, 00:14:27.846 "flush": true, 00:14:27.846 "reset": true, 00:14:27.846 "nvme_admin": false, 00:14:27.846 "nvme_io": false, 00:14:27.846 "nvme_io_md": false, 00:14:27.846 "write_zeroes": true, 00:14:27.846 "zcopy": true, 00:14:27.846 "get_zone_info": false, 00:14:27.846 "zone_management": false, 00:14:27.846 "zone_append": false, 00:14:27.846 "compare": false, 00:14:27.846 "compare_and_write": false, 00:14:27.846 "abort": true, 00:14:27.846 "seek_hole": false, 00:14:27.846 "seek_data": false, 00:14:27.846 "copy": true, 00:14:27.846 "nvme_iov_md": false 00:14:27.846 
}, 00:14:27.846 "memory_domains": [ 00:14:27.846 { 00:14:27.846 "dma_device_id": "system", 00:14:27.846 "dma_device_type": 1 00:14:27.846 }, 00:14:27.846 { 00:14:27.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.846 "dma_device_type": 2 00:14:27.846 } 00:14:27.846 ], 00:14:27.846 "driver_specific": {} 00:14:27.846 }' 00:14:27.846 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:27.846 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:27.846 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:27.846 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:27.846 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:27.846 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:27.846 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:28.106 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:28.106 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:28.106 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:28.106 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:28.106 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:28.106 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:28.106 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:28.106 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev2 00:14:28.365 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:28.365 "name": "BaseBdev2", 00:14:28.365 "aliases": [ 00:14:28.365 "25823e30-2c0b-43e6-b438-88fc291f12f5" 00:14:28.365 ], 00:14:28.365 "product_name": "Malloc disk", 00:14:28.365 "block_size": 512, 00:14:28.365 "num_blocks": 65536, 00:14:28.365 "uuid": "25823e30-2c0b-43e6-b438-88fc291f12f5", 00:14:28.365 "assigned_rate_limits": { 00:14:28.365 "rw_ios_per_sec": 0, 00:14:28.365 "rw_mbytes_per_sec": 0, 00:14:28.365 "r_mbytes_per_sec": 0, 00:14:28.365 "w_mbytes_per_sec": 0 00:14:28.365 }, 00:14:28.365 "claimed": true, 00:14:28.366 "claim_type": "exclusive_write", 00:14:28.366 "zoned": false, 00:14:28.366 "supported_io_types": { 00:14:28.366 "read": true, 00:14:28.366 "write": true, 00:14:28.366 "unmap": true, 00:14:28.366 "flush": true, 00:14:28.366 "reset": true, 00:14:28.366 "nvme_admin": false, 00:14:28.366 "nvme_io": false, 00:14:28.366 "nvme_io_md": false, 00:14:28.366 "write_zeroes": true, 00:14:28.366 "zcopy": true, 00:14:28.366 "get_zone_info": false, 00:14:28.366 "zone_management": false, 00:14:28.366 "zone_append": false, 00:14:28.366 "compare": false, 00:14:28.366 "compare_and_write": false, 00:14:28.366 "abort": true, 00:14:28.366 "seek_hole": false, 00:14:28.366 "seek_data": false, 00:14:28.366 "copy": true, 00:14:28.366 "nvme_iov_md": false 00:14:28.366 }, 00:14:28.366 "memory_domains": [ 00:14:28.366 { 00:14:28.366 "dma_device_id": "system", 00:14:28.366 "dma_device_type": 1 00:14:28.366 }, 00:14:28.366 { 00:14:28.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.366 "dma_device_type": 2 00:14:28.366 } 00:14:28.366 ], 00:14:28.366 "driver_specific": {} 00:14:28.366 }' 00:14:28.366 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:28.366 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:28.366 21:57:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:28.366 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:28.366 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:28.366 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:28.366 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:28.366 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:28.625 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:28.625 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:28.625 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:28.625 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:28.625 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:28.625 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:28.625 21:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:28.625 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:28.625 "name": "BaseBdev3", 00:14:28.625 "aliases": [ 00:14:28.625 "08afed3e-cbe4-461f-a1bd-4899855ae079" 00:14:28.625 ], 00:14:28.625 "product_name": "Malloc disk", 00:14:28.625 "block_size": 512, 00:14:28.625 "num_blocks": 65536, 00:14:28.625 "uuid": "08afed3e-cbe4-461f-a1bd-4899855ae079", 00:14:28.625 "assigned_rate_limits": { 00:14:28.625 "rw_ios_per_sec": 0, 00:14:28.625 "rw_mbytes_per_sec": 0, 00:14:28.625 
"r_mbytes_per_sec": 0, 00:14:28.625 "w_mbytes_per_sec": 0 00:14:28.625 }, 00:14:28.625 "claimed": true, 00:14:28.625 "claim_type": "exclusive_write", 00:14:28.625 "zoned": false, 00:14:28.626 "supported_io_types": { 00:14:28.626 "read": true, 00:14:28.626 "write": true, 00:14:28.626 "unmap": true, 00:14:28.626 "flush": true, 00:14:28.626 "reset": true, 00:14:28.626 "nvme_admin": false, 00:14:28.626 "nvme_io": false, 00:14:28.626 "nvme_io_md": false, 00:14:28.626 "write_zeroes": true, 00:14:28.626 "zcopy": true, 00:14:28.626 "get_zone_info": false, 00:14:28.626 "zone_management": false, 00:14:28.626 "zone_append": false, 00:14:28.626 "compare": false, 00:14:28.626 "compare_and_write": false, 00:14:28.626 "abort": true, 00:14:28.626 "seek_hole": false, 00:14:28.626 "seek_data": false, 00:14:28.626 "copy": true, 00:14:28.626 "nvme_iov_md": false 00:14:28.626 }, 00:14:28.626 "memory_domains": [ 00:14:28.626 { 00:14:28.626 "dma_device_id": "system", 00:14:28.626 "dma_device_type": 1 00:14:28.626 }, 00:14:28.626 { 00:14:28.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.626 "dma_device_type": 2 00:14:28.626 } 00:14:28.626 ], 00:14:28.626 "driver_specific": {} 00:14:28.626 }' 00:14:28.626 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:28.885 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:28.885 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:28.885 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:28.885 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:28.885 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:28.885 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:28.885 21:57:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:28.885 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:28.885 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:29.144 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:29.144 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:29.144 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:29.144 [2024-07-13 21:57:48.481650] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:29.144 [2024-07-13 21:57:48.481679] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:29.144 [2024-07-13 21:57:48.481733] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:29.144 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:29.144 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:14:29.144 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:29.144 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:14:29.144 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:29.144 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:14:29.144 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:29.144 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:29.144 21:57:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:29.144 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:29.144 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:29.144 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:29.144 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:29.144 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:29.144 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:29.144 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:29.144 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.467 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:29.467 "name": "Existed_Raid", 00:14:29.467 "uuid": "421ddc5d-6aff-4f2b-8b75-330c566ced57", 00:14:29.467 "strip_size_kb": 64, 00:14:29.467 "state": "offline", 00:14:29.467 "raid_level": "raid0", 00:14:29.467 "superblock": true, 00:14:29.467 "num_base_bdevs": 3, 00:14:29.467 "num_base_bdevs_discovered": 2, 00:14:29.467 "num_base_bdevs_operational": 2, 00:14:29.467 "base_bdevs_list": [ 00:14:29.467 { 00:14:29.467 "name": null, 00:14:29.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.467 "is_configured": false, 00:14:29.467 "data_offset": 2048, 00:14:29.467 "data_size": 63488 00:14:29.467 }, 00:14:29.467 { 00:14:29.467 "name": "BaseBdev2", 00:14:29.467 "uuid": "25823e30-2c0b-43e6-b438-88fc291f12f5", 00:14:29.467 "is_configured": true, 00:14:29.467 
"data_offset": 2048, 00:14:29.467 "data_size": 63488 00:14:29.467 }, 00:14:29.467 { 00:14:29.467 "name": "BaseBdev3", 00:14:29.467 "uuid": "08afed3e-cbe4-461f-a1bd-4899855ae079", 00:14:29.467 "is_configured": true, 00:14:29.467 "data_offset": 2048, 00:14:29.467 "data_size": 63488 00:14:29.467 } 00:14:29.467 ] 00:14:29.467 }' 00:14:29.467 21:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:29.467 21:57:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.036 21:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:30.036 21:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:30.036 21:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.036 21:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:30.036 21:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:30.036 21:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:30.036 21:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:30.294 [2024-07-13 21:57:49.556788] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:30.294 21:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:30.294 21:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:30.294 21:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.294 21:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:30.553 21:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:30.553 21:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:30.553 21:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:30.811 [2024-07-13 21:57:49.992596] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:30.812 [2024-07-13 21:57:49.992650] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:14:30.812 21:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:30.812 21:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:30.812 21:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.812 21:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:31.070 21:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:31.070 21:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:31.070 21:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:14:31.070 21:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:31.070 21:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:31.070 21:57:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:31.328 BaseBdev2 00:14:31.328 21:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:31.328 21:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:31.328 21:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:31.328 21:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:31.328 21:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:31.328 21:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:31.328 21:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:31.328 21:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:31.586 [ 00:14:31.586 { 00:14:31.586 "name": "BaseBdev2", 00:14:31.586 "aliases": [ 00:14:31.586 "5a9dc045-0b62-4d3b-b19f-3cef747e1c6d" 00:14:31.586 ], 00:14:31.586 "product_name": "Malloc disk", 00:14:31.586 "block_size": 512, 00:14:31.586 "num_blocks": 65536, 00:14:31.586 "uuid": "5a9dc045-0b62-4d3b-b19f-3cef747e1c6d", 00:14:31.586 "assigned_rate_limits": { 00:14:31.586 "rw_ios_per_sec": 0, 00:14:31.586 "rw_mbytes_per_sec": 0, 00:14:31.586 "r_mbytes_per_sec": 0, 00:14:31.586 "w_mbytes_per_sec": 0 00:14:31.586 }, 00:14:31.586 "claimed": false, 00:14:31.586 "zoned": false, 00:14:31.586 "supported_io_types": { 00:14:31.586 "read": true, 
00:14:31.586 "write": true, 00:14:31.586 "unmap": true, 00:14:31.586 "flush": true, 00:14:31.586 "reset": true, 00:14:31.586 "nvme_admin": false, 00:14:31.586 "nvme_io": false, 00:14:31.586 "nvme_io_md": false, 00:14:31.586 "write_zeroes": true, 00:14:31.586 "zcopy": true, 00:14:31.586 "get_zone_info": false, 00:14:31.586 "zone_management": false, 00:14:31.586 "zone_append": false, 00:14:31.586 "compare": false, 00:14:31.586 "compare_and_write": false, 00:14:31.587 "abort": true, 00:14:31.587 "seek_hole": false, 00:14:31.587 "seek_data": false, 00:14:31.587 "copy": true, 00:14:31.587 "nvme_iov_md": false 00:14:31.587 }, 00:14:31.587 "memory_domains": [ 00:14:31.587 { 00:14:31.587 "dma_device_id": "system", 00:14:31.587 "dma_device_type": 1 00:14:31.587 }, 00:14:31.587 { 00:14:31.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.587 "dma_device_type": 2 00:14:31.587 } 00:14:31.587 ], 00:14:31.587 "driver_specific": {} 00:14:31.587 } 00:14:31.587 ] 00:14:31.587 21:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:31.587 21:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:31.587 21:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:31.587 21:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:31.870 BaseBdev3 00:14:31.870 21:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:31.870 21:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:31.870 21:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:31.870 21:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 
00:14:31.870 21:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:31.870 21:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:31.870 21:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:31.870 21:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:32.128 [ 00:14:32.128 { 00:14:32.128 "name": "BaseBdev3", 00:14:32.128 "aliases": [ 00:14:32.128 "2e81ea7b-9228-4e2c-8342-cede176b2264" 00:14:32.128 ], 00:14:32.128 "product_name": "Malloc disk", 00:14:32.128 "block_size": 512, 00:14:32.128 "num_blocks": 65536, 00:14:32.128 "uuid": "2e81ea7b-9228-4e2c-8342-cede176b2264", 00:14:32.128 "assigned_rate_limits": { 00:14:32.128 "rw_ios_per_sec": 0, 00:14:32.128 "rw_mbytes_per_sec": 0, 00:14:32.128 "r_mbytes_per_sec": 0, 00:14:32.128 "w_mbytes_per_sec": 0 00:14:32.128 }, 00:14:32.128 "claimed": false, 00:14:32.128 "zoned": false, 00:14:32.128 "supported_io_types": { 00:14:32.128 "read": true, 00:14:32.128 "write": true, 00:14:32.128 "unmap": true, 00:14:32.128 "flush": true, 00:14:32.128 "reset": true, 00:14:32.128 "nvme_admin": false, 00:14:32.128 "nvme_io": false, 00:14:32.128 "nvme_io_md": false, 00:14:32.128 "write_zeroes": true, 00:14:32.128 "zcopy": true, 00:14:32.128 "get_zone_info": false, 00:14:32.128 "zone_management": false, 00:14:32.128 "zone_append": false, 00:14:32.128 "compare": false, 00:14:32.128 "compare_and_write": false, 00:14:32.128 "abort": true, 00:14:32.128 "seek_hole": false, 00:14:32.128 "seek_data": false, 00:14:32.128 "copy": true, 00:14:32.128 "nvme_iov_md": false 00:14:32.128 }, 00:14:32.128 "memory_domains": [ 00:14:32.128 { 00:14:32.128 
"dma_device_id": "system", 00:14:32.128 "dma_device_type": 1 00:14:32.128 }, 00:14:32.128 { 00:14:32.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.128 "dma_device_type": 2 00:14:32.128 } 00:14:32.128 ], 00:14:32.128 "driver_specific": {} 00:14:32.128 } 00:14:32.128 ] 00:14:32.128 21:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:32.128 21:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:32.128 21:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:32.128 21:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:32.128 [2024-07-13 21:57:51.510974] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:32.128 [2024-07-13 21:57:51.511020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:32.128 [2024-07-13 21:57:51.511047] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:32.128 [2024-07-13 21:57:51.512815] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:32.387 21:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:32.387 21:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:32.387 21:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:32.387 21:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:32.387 21:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:14:32.387 21:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:32.387 21:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:32.387 21:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:32.387 21:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:32.387 21:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:32.387 21:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.387 21:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.387 21:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:32.387 "name": "Existed_Raid", 00:14:32.387 "uuid": "b802ea2d-75ba-4958-aeeb-2eccefa23273", 00:14:32.387 "strip_size_kb": 64, 00:14:32.387 "state": "configuring", 00:14:32.387 "raid_level": "raid0", 00:14:32.387 "superblock": true, 00:14:32.387 "num_base_bdevs": 3, 00:14:32.387 "num_base_bdevs_discovered": 2, 00:14:32.387 "num_base_bdevs_operational": 3, 00:14:32.387 "base_bdevs_list": [ 00:14:32.387 { 00:14:32.387 "name": "BaseBdev1", 00:14:32.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.387 "is_configured": false, 00:14:32.387 "data_offset": 0, 00:14:32.387 "data_size": 0 00:14:32.387 }, 00:14:32.387 { 00:14:32.387 "name": "BaseBdev2", 00:14:32.387 "uuid": "5a9dc045-0b62-4d3b-b19f-3cef747e1c6d", 00:14:32.387 "is_configured": true, 00:14:32.387 "data_offset": 2048, 00:14:32.387 "data_size": 63488 00:14:32.387 }, 00:14:32.387 { 00:14:32.387 "name": "BaseBdev3", 00:14:32.387 "uuid": "2e81ea7b-9228-4e2c-8342-cede176b2264", 00:14:32.387 
"is_configured": true, 00:14:32.387 "data_offset": 2048, 00:14:32.387 "data_size": 63488 00:14:32.387 } 00:14:32.387 ] 00:14:32.387 }' 00:14:32.387 21:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:32.387 21:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.954 21:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:32.954 [2024-07-13 21:57:52.337108] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:33.212 21:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:33.212 21:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:33.212 21:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:33.212 21:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:33.212 21:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:33.212 21:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:33.212 21:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:33.212 21:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:33.212 21:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:33.212 21:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:33.212 21:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:33.212 21:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:33.212 21:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:14:33.212 "name": "Existed_Raid",
00:14:33.212 "uuid": "b802ea2d-75ba-4958-aeeb-2eccefa23273",
00:14:33.212 "strip_size_kb": 64,
00:14:33.212 "state": "configuring",
00:14:33.212 "raid_level": "raid0",
00:14:33.212 "superblock": true,
00:14:33.212 "num_base_bdevs": 3,
00:14:33.212 "num_base_bdevs_discovered": 1,
00:14:33.212 "num_base_bdevs_operational": 3,
00:14:33.212 "base_bdevs_list": [
00:14:33.212 {
00:14:33.212 "name": "BaseBdev1",
00:14:33.212 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:33.212 "is_configured": false,
00:14:33.212 "data_offset": 0,
00:14:33.212 "data_size": 0
00:14:33.212 },
00:14:33.212 {
00:14:33.212 "name": null,
00:14:33.212 "uuid": "5a9dc045-0b62-4d3b-b19f-3cef747e1c6d",
00:14:33.212 "is_configured": false,
00:14:33.212 "data_offset": 2048,
00:14:33.212 "data_size": 63488
00:14:33.212 },
00:14:33.212 {
00:14:33.212 "name": "BaseBdev3",
00:14:33.212 "uuid": "2e81ea7b-9228-4e2c-8342-cede176b2264",
00:14:33.212 "is_configured": true,
00:14:33.212 "data_offset": 2048,
00:14:33.212 "data_size": 63488
00:14:33.212 }
00:14:33.212 ]
00:14:33.212 }'
00:14:33.212 21:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:14:33.212 21:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:33.778 21:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:14:33.779 21:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:33.779 21:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]]
00:14:33.779 21:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:14:34.044 [2024-07-13 21:57:53.348617] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:34.044 BaseBdev1
00:14:34.044 21:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1
00:14:34.044 21:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:14:34.044 21:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:34.044 21:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i
00:14:34.044 21:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:34.044 21:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:34.044 21:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:34.301 21:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:34.560 [
00:14:34.560 {
00:14:34.560 "name": "BaseBdev1",
00:14:34.560 "aliases": [
00:14:34.560 "bd1b3549-6b77-43a0-9091-4734516abd2f"
00:14:34.560 ],
00:14:34.560 "product_name": "Malloc disk",
00:14:34.560 "block_size": 512,
00:14:34.560 "num_blocks": 65536,
00:14:34.560 "uuid": "bd1b3549-6b77-43a0-9091-4734516abd2f",
00:14:34.560 "assigned_rate_limits": {
00:14:34.560
"rw_ios_per_sec": 0,
00:14:34.560 "rw_mbytes_per_sec": 0,
00:14:34.560 "r_mbytes_per_sec": 0,
00:14:34.560 "w_mbytes_per_sec": 0
00:14:34.560 },
00:14:34.560 "claimed": true,
00:14:34.560 "claim_type": "exclusive_write",
00:14:34.560 "zoned": false,
00:14:34.560 "supported_io_types": {
00:14:34.560 "read": true,
00:14:34.560 "write": true,
00:14:34.560 "unmap": true,
00:14:34.560 "flush": true,
00:14:34.560 "reset": true,
00:14:34.560 "nvme_admin": false,
00:14:34.560 "nvme_io": false,
00:14:34.560 "nvme_io_md": false,
00:14:34.560 "write_zeroes": true,
00:14:34.560 "zcopy": true,
00:14:34.560 "get_zone_info": false,
00:14:34.560 "zone_management": false,
00:14:34.560 "zone_append": false,
00:14:34.560 "compare": false,
00:14:34.560 "compare_and_write": false,
00:14:34.560 "abort": true,
00:14:34.560 "seek_hole": false,
00:14:34.560 "seek_data": false,
00:14:34.560 "copy": true,
00:14:34.560 "nvme_iov_md": false
00:14:34.560 },
00:14:34.560 "memory_domains": [
00:14:34.560 {
00:14:34.560 "dma_device_id": "system",
00:14:34.560 "dma_device_type": 1
00:14:34.560 },
00:14:34.560 {
00:14:34.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:34.560 "dma_device_type": 2
00:14:34.560 }
00:14:34.560 ],
00:14:34.560 "driver_specific": {}
00:14:34.560 }
00:14:34.560 ]
00:14:34.560 21:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0
00:14:34.560 21:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:14:34.560 21:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:14:34.560 21:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:14:34.560 21:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:14:34.560 21:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:14:34.560 21:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:14:34.560 21:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:14:34.560 21:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:14:34.560 21:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:14:34.560 21:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:14:34.560 21:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:34.560 21:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:34.560 21:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:14:34.560 "name": "Existed_Raid",
00:14:34.560 "uuid": "b802ea2d-75ba-4958-aeeb-2eccefa23273",
00:14:34.560 "strip_size_kb": 64,
00:14:34.560 "state": "configuring",
00:14:34.560 "raid_level": "raid0",
00:14:34.560 "superblock": true,
00:14:34.560 "num_base_bdevs": 3,
00:14:34.560 "num_base_bdevs_discovered": 2,
00:14:34.560 "num_base_bdevs_operational": 3,
00:14:34.560 "base_bdevs_list": [
00:14:34.560 {
00:14:34.560 "name": "BaseBdev1",
00:14:34.560 "uuid": "bd1b3549-6b77-43a0-9091-4734516abd2f",
00:14:34.560 "is_configured": true,
00:14:34.560 "data_offset": 2048,
00:14:34.560 "data_size": 63488
00:14:34.560 },
00:14:34.560 {
00:14:34.560 "name": null,
00:14:34.560 "uuid": "5a9dc045-0b62-4d3b-b19f-3cef747e1c6d",
00:14:34.560 "is_configured": false,
00:14:34.560 "data_offset": 2048,
00:14:34.560 "data_size": 63488
00:14:34.560 },
00:14:34.560 {
00:14:34.560 "name": "BaseBdev3",
00:14:34.560 "uuid": "2e81ea7b-9228-4e2c-8342-cede176b2264",
00:14:34.560 "is_configured": true,
00:14:34.560 "data_offset": 2048,
00:14:34.560 "data_size": 63488
00:14:34.560 }
00:14:34.560 ]
00:14:34.560 }'
00:14:34.560 21:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:14:34.560 21:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:35.127 21:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:35.127 21:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:14:35.127 21:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]]
00:14:35.127 21:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3
00:14:35.386 [2024-07-13 21:57:54.648086] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:14:35.386 21:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:14:35.386 21:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:14:35.386 21:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:14:35.386 21:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:14:35.386 21:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:14:35.386 21:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:14:35.386 21:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:14:35.386
21:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:14:35.386 21:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:14:35.386 21:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:14:35.386 21:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:35.386 21:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:35.645 21:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:14:35.645 "name": "Existed_Raid",
00:14:35.645 "uuid": "b802ea2d-75ba-4958-aeeb-2eccefa23273",
00:14:35.645 "strip_size_kb": 64,
00:14:35.645 "state": "configuring",
00:14:35.645 "raid_level": "raid0",
00:14:35.645 "superblock": true,
00:14:35.645 "num_base_bdevs": 3,
00:14:35.645 "num_base_bdevs_discovered": 1,
00:14:35.645 "num_base_bdevs_operational": 3,
00:14:35.645 "base_bdevs_list": [
00:14:35.645 {
00:14:35.645 "name": "BaseBdev1",
00:14:35.645 "uuid": "bd1b3549-6b77-43a0-9091-4734516abd2f",
00:14:35.645 "is_configured": true,
00:14:35.645 "data_offset": 2048,
00:14:35.646 "data_size": 63488
00:14:35.646 },
00:14:35.646 {
00:14:35.646 "name": null,
00:14:35.646 "uuid": "5a9dc045-0b62-4d3b-b19f-3cef747e1c6d",
00:14:35.646 "is_configured": false,
00:14:35.646 "data_offset": 2048,
00:14:35.646 "data_size": 63488
00:14:35.646 },
00:14:35.646 {
00:14:35.646 "name": null,
00:14:35.646 "uuid": "2e81ea7b-9228-4e2c-8342-cede176b2264",
00:14:35.646 "is_configured": false,
00:14:35.646 "data_offset": 2048,
00:14:35.646 "data_size": 63488
00:14:35.646 }
00:14:35.646 ]
00:14:35.646 }'
00:14:35.646 21:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:14:35.646 21:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:36.214 21:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:36.214 21:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:14:36.214 21:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]]
00:14:36.214 21:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:14:36.473 [2024-07-13 21:57:55.650757] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:36.473 21:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:14:36.473 21:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:14:36.473 21:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:14:36.473 21:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:14:36.473 21:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:14:36.473 21:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:14:36.473 21:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:14:36.473 21:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:14:36.473 21:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:14:36.473 21:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:14:36.473 21:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:36.473 21:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:36.473 21:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:14:36.473 "name": "Existed_Raid",
00:14:36.473 "uuid": "b802ea2d-75ba-4958-aeeb-2eccefa23273",
00:14:36.473 "strip_size_kb": 64,
00:14:36.473 "state": "configuring",
00:14:36.473 "raid_level": "raid0",
00:14:36.473 "superblock": true,
00:14:36.473 "num_base_bdevs": 3,
00:14:36.473 "num_base_bdevs_discovered": 2,
00:14:36.473 "num_base_bdevs_operational": 3,
00:14:36.473 "base_bdevs_list": [
00:14:36.473 {
00:14:36.473 "name": "BaseBdev1",
00:14:36.473 "uuid": "bd1b3549-6b77-43a0-9091-4734516abd2f",
00:14:36.473 "is_configured": true,
00:14:36.473 "data_offset": 2048,
00:14:36.473 "data_size": 63488
00:14:36.473 },
00:14:36.473 {
00:14:36.473 "name": null,
00:14:36.473 "uuid": "5a9dc045-0b62-4d3b-b19f-3cef747e1c6d",
00:14:36.473 "is_configured": false,
00:14:36.473 "data_offset": 2048,
00:14:36.473 "data_size": 63488
00:14:36.473 },
00:14:36.473 {
00:14:36.473 "name": "BaseBdev3",
00:14:36.473 "uuid": "2e81ea7b-9228-4e2c-8342-cede176b2264",
00:14:36.473 "is_configured": true,
00:14:36.473 "data_offset": 2048,
00:14:36.473 "data_size": 63488
00:14:36.473 }
00:14:36.473 ]
00:14:36.473 }'
00:14:36.473 21:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:14:36.473 21:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:37.040 21:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:37.040 21:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:14:37.299 21:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]]
00:14:37.299 21:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:14:37.299 [2024-07-13 21:57:56.637391] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:37.559 21:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:14:37.559 21:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:14:37.559 21:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:14:37.559 21:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:14:37.559 21:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:14:37.559 21:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:14:37.559 21:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:14:37.559 21:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:14:37.559 21:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:14:37.559 21:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:14:37.559 21:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:37.559 21:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:37.559 21:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:14:37.559 "name": "Existed_Raid",
00:14:37.559 "uuid": "b802ea2d-75ba-4958-aeeb-2eccefa23273",
00:14:37.559 "strip_size_kb": 64,
00:14:37.559 "state": "configuring",
00:14:37.559 "raid_level": "raid0",
00:14:37.559 "superblock": true,
00:14:37.559 "num_base_bdevs": 3,
00:14:37.559 "num_base_bdevs_discovered": 1,
00:14:37.559 "num_base_bdevs_operational": 3,
00:14:37.559 "base_bdevs_list": [
00:14:37.559 {
00:14:37.559 "name": null,
00:14:37.559 "uuid": "bd1b3549-6b77-43a0-9091-4734516abd2f",
00:14:37.559 "is_configured": false,
00:14:37.559 "data_offset": 2048,
00:14:37.559 "data_size": 63488
00:14:37.559 },
00:14:37.559 {
00:14:37.559 "name": null,
00:14:37.559 "uuid": "5a9dc045-0b62-4d3b-b19f-3cef747e1c6d",
00:14:37.559 "is_configured": false,
00:14:37.559 "data_offset": 2048,
00:14:37.559 "data_size": 63488
00:14:37.559 },
00:14:37.559 {
00:14:37.559 "name": "BaseBdev3",
00:14:37.559 "uuid": "2e81ea7b-9228-4e2c-8342-cede176b2264",
00:14:37.559 "is_configured": true,
00:14:37.559 "data_offset": 2048,
00:14:37.559 "data_size": 63488
00:14:37.559 }
00:14:37.559 ]
00:14:37.559 }'
00:14:37.559 21:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:14:37.559 21:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:38.128 21:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:14:38.128 21:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:38.387 21:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false ==
\f\a\l\s\e ]]
00:14:38.387 21:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:14:38.387 [2024-07-13 21:57:57.729900] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:38.387 21:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:14:38.387 21:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:14:38.387 21:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:14:38.387 21:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:14:38.387 21:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:14:38.387 21:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:14:38.387 21:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:14:38.387 21:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:14:38.387 21:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:14:38.387 21:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:14:38.387 21:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:38.387 21:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:38.647 21:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:14:38.647 "name": "Existed_Raid",
00:14:38.647 "uuid": "b802ea2d-75ba-4958-aeeb-2eccefa23273",
00:14:38.647 "strip_size_kb": 64,
00:14:38.647 "state": "configuring",
00:14:38.647 "raid_level": "raid0",
00:14:38.647 "superblock": true,
00:14:38.647 "num_base_bdevs": 3,
00:14:38.647 "num_base_bdevs_discovered": 2,
00:14:38.647 "num_base_bdevs_operational": 3,
00:14:38.647 "base_bdevs_list": [
00:14:38.647 {
00:14:38.647 "name": null,
00:14:38.647 "uuid": "bd1b3549-6b77-43a0-9091-4734516abd2f",
00:14:38.647 "is_configured": false,
00:14:38.647 "data_offset": 2048,
00:14:38.647 "data_size": 63488
00:14:38.647 },
00:14:38.647 {
00:14:38.647 "name": "BaseBdev2",
00:14:38.647 "uuid": "5a9dc045-0b62-4d3b-b19f-3cef747e1c6d",
00:14:38.647 "is_configured": true,
00:14:38.647 "data_offset": 2048,
00:14:38.647 "data_size": 63488
00:14:38.647 },
00:14:38.647 {
00:14:38.647 "name": "BaseBdev3",
00:14:38.647 "uuid": "2e81ea7b-9228-4e2c-8342-cede176b2264",
00:14:38.647 "is_configured": true,
00:14:38.647 "data_offset": 2048,
00:14:38.647 "data_size": 63488
00:14:38.647 }
00:14:38.647 ]
00:14:38.647 }'
00:14:38.647 21:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:14:38.647 21:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:39.216 21:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:39.216 21:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:14:39.216 21:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]]
00:14:39.216 21:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:39.216
21:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:14:39.475 21:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u bd1b3549-6b77-43a0-9091-4734516abd2f
00:14:39.734 [2024-07-13 21:57:58.949078] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:14:39.734 [2024-07-13 21:57:58.949299] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041780
00:14:39.734 [2024-07-13 21:57:58.949317] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:14:39.734 [2024-07-13 21:57:58.949554] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980
00:14:39.734 [2024-07-13 21:57:58.949726] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041780
00:14:39.734 [2024-07-13 21:57:58.949737] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000041780
00:14:39.734 NewBaseBdev
00:14:39.734 [2024-07-13 21:57:58.949884] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:39.734 21:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev
00:14:39.734 21:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev
00:14:39.734 21:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:39.734 21:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i
00:14:39.734 21:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:39.734 21:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:39.734 21:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:39.994 21:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000
00:14:39.994 [
00:14:39.994 {
00:14:39.994 "name": "NewBaseBdev",
00:14:39.994 "aliases": [
00:14:39.994 "bd1b3549-6b77-43a0-9091-4734516abd2f"
00:14:39.994 ],
00:14:39.994 "product_name": "Malloc disk",
00:14:39.994 "block_size": 512,
00:14:39.994 "num_blocks": 65536,
00:14:39.994 "uuid": "bd1b3549-6b77-43a0-9091-4734516abd2f",
00:14:39.994 "assigned_rate_limits": {
00:14:39.994 "rw_ios_per_sec": 0,
00:14:39.994 "rw_mbytes_per_sec": 0,
00:14:39.994 "r_mbytes_per_sec": 0,
00:14:39.994 "w_mbytes_per_sec": 0
00:14:39.994 },
00:14:39.994 "claimed": true,
00:14:39.994 "claim_type": "exclusive_write",
00:14:39.994 "zoned": false,
00:14:39.994 "supported_io_types": {
00:14:39.994 "read": true,
00:14:39.994 "write": true,
00:14:39.994 "unmap": true,
00:14:39.994 "flush": true,
00:14:39.994 "reset": true,
00:14:39.994 "nvme_admin": false,
00:14:39.994 "nvme_io": false,
00:14:39.994 "nvme_io_md": false,
00:14:39.994 "write_zeroes": true,
00:14:39.994 "zcopy": true,
00:14:39.994 "get_zone_info": false,
00:14:39.994 "zone_management": false,
00:14:39.994 "zone_append": false,
00:14:39.994 "compare": false,
00:14:39.994 "compare_and_write": false,
00:14:39.994 "abort": true,
00:14:39.994 "seek_hole": false,
00:14:39.994 "seek_data": false,
00:14:39.994 "copy": true,
00:14:39.994 "nvme_iov_md": false
00:14:39.994 },
00:14:39.994 "memory_domains": [
00:14:39.994 {
00:14:39.994 "dma_device_id": "system",
00:14:39.994 "dma_device_type": 1
00:14:39.994 },
00:14:39.994 {
00:14:39.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:39.994 "dma_device_type": 2
00:14:39.994 }
00:14:39.994 ],
00:14:39.994 "driver_specific": {}
00:14:39.994 }
00:14:39.994 ]
00:14:39.994 21:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0
00:14:39.994 21:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:14:39.994 21:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:14:39.994 21:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:14:39.994 21:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:14:39.994 21:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:14:39.994 21:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:14:39.994 21:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:14:39.994 21:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:14:39.994 21:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:14:39.994 21:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:14:39.994 21:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:39.994 21:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:40.254 21:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:14:40.254 "name": "Existed_Raid",
00:14:40.254 "uuid": "b802ea2d-75ba-4958-aeeb-2eccefa23273",
00:14:40.254 "strip_size_kb": 64,
00:14:40.254
"state": "online",
00:14:40.254 "raid_level": "raid0",
00:14:40.254 "superblock": true,
00:14:40.254 "num_base_bdevs": 3,
00:14:40.254 "num_base_bdevs_discovered": 3,
00:14:40.254 "num_base_bdevs_operational": 3,
00:14:40.254 "base_bdevs_list": [
00:14:40.254 {
00:14:40.254 "name": "NewBaseBdev",
00:14:40.254 "uuid": "bd1b3549-6b77-43a0-9091-4734516abd2f",
00:14:40.254 "is_configured": true,
00:14:40.254 "data_offset": 2048,
00:14:40.254 "data_size": 63488
00:14:40.254 },
00:14:40.254 {
00:14:40.254 "name": "BaseBdev2",
00:14:40.254 "uuid": "5a9dc045-0b62-4d3b-b19f-3cef747e1c6d",
00:14:40.254 "is_configured": true,
00:14:40.254 "data_offset": 2048,
00:14:40.254 "data_size": 63488
00:14:40.254 },
00:14:40.254 {
00:14:40.254 "name": "BaseBdev3",
00:14:40.254 "uuid": "2e81ea7b-9228-4e2c-8342-cede176b2264",
00:14:40.254 "is_configured": true,
00:14:40.254 "data_offset": 2048,
00:14:40.254 "data_size": 63488
00:14:40.254 }
00:14:40.254 ]
00:14:40.254 }'
00:14:40.254 21:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:14:40.254 21:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:40.823 21:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid
00:14:40.823 21:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid
00:14:40.823 21:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info
00:14:40.823 21:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info
00:14:40.823 21:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names
00:14:40.823 21:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name
00:14:40.823 21:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid
00:14:40.823 21:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]'
00:14:40.823 [2024-07-13 21:58:00.128549] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:40.823 21:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{
00:14:40.823 "name": "Existed_Raid",
00:14:40.823 "aliases": [
00:14:40.823 "b802ea2d-75ba-4958-aeeb-2eccefa23273"
00:14:40.823 ],
00:14:40.823 "product_name": "Raid Volume",
00:14:40.823 "block_size": 512,
00:14:40.823 "num_blocks": 190464,
00:14:40.823 "uuid": "b802ea2d-75ba-4958-aeeb-2eccefa23273",
00:14:40.823 "assigned_rate_limits": {
00:14:40.823 "rw_ios_per_sec": 0,
00:14:40.823 "rw_mbytes_per_sec": 0,
00:14:40.823 "r_mbytes_per_sec": 0,
00:14:40.823 "w_mbytes_per_sec": 0
00:14:40.823 },
00:14:40.823 "claimed": false,
00:14:40.823 "zoned": false,
00:14:40.823 "supported_io_types": {
00:14:40.823 "read": true,
00:14:40.823 "write": true,
00:14:40.823 "unmap": true,
00:14:40.823 "flush": true,
00:14:40.823 "reset": true,
00:14:40.823 "nvme_admin": false,
00:14:40.823 "nvme_io": false,
00:14:40.823 "nvme_io_md": false,
00:14:40.823 "write_zeroes": true,
00:14:40.823 "zcopy": false,
00:14:40.823 "get_zone_info": false,
00:14:40.823 "zone_management": false,
00:14:40.823 "zone_append": false,
00:14:40.823 "compare": false,
00:14:40.823 "compare_and_write": false,
00:14:40.823 "abort": false,
00:14:40.823 "seek_hole": false,
00:14:40.823 "seek_data": false,
00:14:40.823 "copy": false,
00:14:40.823 "nvme_iov_md": false
00:14:40.823 },
00:14:40.823 "memory_domains": [
00:14:40.823 {
00:14:40.823 "dma_device_id": "system",
00:14:40.823 "dma_device_type": 1
00:14:40.823 },
00:14:40.823 {
00:14:40.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:40.823 "dma_device_type": 2
00:14:40.823 },
00:14:40.823 {
00:14:40.823 "dma_device_id": "system",
00:14:40.823 "dma_device_type": 1
00:14:40.823 },
00:14:40.823 {
00:14:40.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:40.823 "dma_device_type": 2
00:14:40.823 },
00:14:40.823 {
00:14:40.823 "dma_device_id": "system",
00:14:40.823 "dma_device_type": 1
00:14:40.823 },
00:14:40.823 {
00:14:40.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:40.823 "dma_device_type": 2
00:14:40.823 }
00:14:40.823 ],
00:14:40.823 "driver_specific": {
00:14:40.823 "raid": {
00:14:40.823 "uuid": "b802ea2d-75ba-4958-aeeb-2eccefa23273",
00:14:40.823 "strip_size_kb": 64,
00:14:40.823 "state": "online",
00:14:40.823 "raid_level": "raid0",
00:14:40.823 "superblock": true,
00:14:40.823 "num_base_bdevs": 3,
00:14:40.823 "num_base_bdevs_discovered": 3,
00:14:40.823 "num_base_bdevs_operational": 3,
00:14:40.823 "base_bdevs_list": [
00:14:40.823 {
00:14:40.823 "name": "NewBaseBdev",
00:14:40.823 "uuid": "bd1b3549-6b77-43a0-9091-4734516abd2f",
00:14:40.823 "is_configured": true,
00:14:40.823 "data_offset": 2048,
00:14:40.823 "data_size": 63488
00:14:40.823 },
00:14:40.823 {
00:14:40.823 "name": "BaseBdev2",
00:14:40.823 "uuid": "5a9dc045-0b62-4d3b-b19f-3cef747e1c6d",
00:14:40.823 "is_configured": true,
00:14:40.823 "data_offset": 2048,
00:14:40.823 "data_size": 63488
00:14:40.823 },
00:14:40.823 {
00:14:40.823 "name": "BaseBdev3",
00:14:40.823 "uuid": "2e81ea7b-9228-4e2c-8342-cede176b2264",
00:14:40.823 "is_configured": true,
00:14:40.823 "data_offset": 2048,
00:14:40.823 "data_size": 63488
00:14:40.823 }
00:14:40.823 ]
00:14:40.823 }
00:14:40.823 }
00:14:40.823 }'
00:14:40.823 21:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:14:40.823 21:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev
00:14:40.823 BaseBdev2
00:14:40.823 BaseBdev3'
00:14:40.823 21:58:00 bdev_raid.raid_state_function_test_sb --
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:40.823 21:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:40.823 21:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:41.082 21:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:41.082 "name": "NewBaseBdev", 00:14:41.082 "aliases": [ 00:14:41.082 "bd1b3549-6b77-43a0-9091-4734516abd2f" 00:14:41.082 ], 00:14:41.082 "product_name": "Malloc disk", 00:14:41.082 "block_size": 512, 00:14:41.082 "num_blocks": 65536, 00:14:41.082 "uuid": "bd1b3549-6b77-43a0-9091-4734516abd2f", 00:14:41.082 "assigned_rate_limits": { 00:14:41.082 "rw_ios_per_sec": 0, 00:14:41.082 "rw_mbytes_per_sec": 0, 00:14:41.082 "r_mbytes_per_sec": 0, 00:14:41.082 "w_mbytes_per_sec": 0 00:14:41.082 }, 00:14:41.082 "claimed": true, 00:14:41.082 "claim_type": "exclusive_write", 00:14:41.082 "zoned": false, 00:14:41.082 "supported_io_types": { 00:14:41.082 "read": true, 00:14:41.082 "write": true, 00:14:41.082 "unmap": true, 00:14:41.082 "flush": true, 00:14:41.082 "reset": true, 00:14:41.082 "nvme_admin": false, 00:14:41.082 "nvme_io": false, 00:14:41.082 "nvme_io_md": false, 00:14:41.082 "write_zeroes": true, 00:14:41.082 "zcopy": true, 00:14:41.082 "get_zone_info": false, 00:14:41.082 "zone_management": false, 00:14:41.082 "zone_append": false, 00:14:41.082 "compare": false, 00:14:41.082 "compare_and_write": false, 00:14:41.082 "abort": true, 00:14:41.082 "seek_hole": false, 00:14:41.082 "seek_data": false, 00:14:41.082 "copy": true, 00:14:41.082 "nvme_iov_md": false 00:14:41.082 }, 00:14:41.082 "memory_domains": [ 00:14:41.082 { 00:14:41.083 "dma_device_id": "system", 00:14:41.083 "dma_device_type": 1 00:14:41.083 }, 00:14:41.083 { 00:14:41.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.083 
"dma_device_type": 2 00:14:41.083 } 00:14:41.083 ], 00:14:41.083 "driver_specific": {} 00:14:41.083 }' 00:14:41.083 21:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:41.083 21:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:41.083 21:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:41.083 21:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:41.342 21:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:41.342 21:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:41.342 21:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:41.342 21:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:41.342 21:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:41.342 21:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:41.342 21:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:41.342 21:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:41.342 21:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:41.342 21:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:41.342 21:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:41.601 21:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:41.601 "name": "BaseBdev2", 00:14:41.601 "aliases": [ 00:14:41.601 
"5a9dc045-0b62-4d3b-b19f-3cef747e1c6d" 00:14:41.601 ], 00:14:41.601 "product_name": "Malloc disk", 00:14:41.601 "block_size": 512, 00:14:41.601 "num_blocks": 65536, 00:14:41.601 "uuid": "5a9dc045-0b62-4d3b-b19f-3cef747e1c6d", 00:14:41.601 "assigned_rate_limits": { 00:14:41.601 "rw_ios_per_sec": 0, 00:14:41.601 "rw_mbytes_per_sec": 0, 00:14:41.601 "r_mbytes_per_sec": 0, 00:14:41.601 "w_mbytes_per_sec": 0 00:14:41.601 }, 00:14:41.601 "claimed": true, 00:14:41.601 "claim_type": "exclusive_write", 00:14:41.601 "zoned": false, 00:14:41.601 "supported_io_types": { 00:14:41.601 "read": true, 00:14:41.601 "write": true, 00:14:41.601 "unmap": true, 00:14:41.601 "flush": true, 00:14:41.601 "reset": true, 00:14:41.601 "nvme_admin": false, 00:14:41.601 "nvme_io": false, 00:14:41.601 "nvme_io_md": false, 00:14:41.601 "write_zeroes": true, 00:14:41.601 "zcopy": true, 00:14:41.601 "get_zone_info": false, 00:14:41.601 "zone_management": false, 00:14:41.601 "zone_append": false, 00:14:41.601 "compare": false, 00:14:41.601 "compare_and_write": false, 00:14:41.601 "abort": true, 00:14:41.601 "seek_hole": false, 00:14:41.601 "seek_data": false, 00:14:41.601 "copy": true, 00:14:41.601 "nvme_iov_md": false 00:14:41.601 }, 00:14:41.601 "memory_domains": [ 00:14:41.601 { 00:14:41.601 "dma_device_id": "system", 00:14:41.601 "dma_device_type": 1 00:14:41.601 }, 00:14:41.601 { 00:14:41.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.601 "dma_device_type": 2 00:14:41.601 } 00:14:41.601 ], 00:14:41.601 "driver_specific": {} 00:14:41.601 }' 00:14:41.601 21:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:41.601 21:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:41.601 21:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:41.601 21:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:41.601 21:58:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:41.601 21:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:41.601 21:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:41.860 21:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:41.860 21:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:41.860 21:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:41.860 21:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:41.860 21:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:41.860 21:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:41.860 21:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:41.860 21:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:42.120 21:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:42.120 "name": "BaseBdev3", 00:14:42.120 "aliases": [ 00:14:42.120 "2e81ea7b-9228-4e2c-8342-cede176b2264" 00:14:42.120 ], 00:14:42.120 "product_name": "Malloc disk", 00:14:42.120 "block_size": 512, 00:14:42.120 "num_blocks": 65536, 00:14:42.120 "uuid": "2e81ea7b-9228-4e2c-8342-cede176b2264", 00:14:42.120 "assigned_rate_limits": { 00:14:42.120 "rw_ios_per_sec": 0, 00:14:42.120 "rw_mbytes_per_sec": 0, 00:14:42.120 "r_mbytes_per_sec": 0, 00:14:42.120 "w_mbytes_per_sec": 0 00:14:42.120 }, 00:14:42.120 "claimed": true, 00:14:42.120 "claim_type": "exclusive_write", 00:14:42.120 "zoned": false, 00:14:42.120 "supported_io_types": 
{ 00:14:42.120 "read": true, 00:14:42.120 "write": true, 00:14:42.120 "unmap": true, 00:14:42.120 "flush": true, 00:14:42.120 "reset": true, 00:14:42.120 "nvme_admin": false, 00:14:42.120 "nvme_io": false, 00:14:42.120 "nvme_io_md": false, 00:14:42.120 "write_zeroes": true, 00:14:42.120 "zcopy": true, 00:14:42.120 "get_zone_info": false, 00:14:42.120 "zone_management": false, 00:14:42.120 "zone_append": false, 00:14:42.120 "compare": false, 00:14:42.120 "compare_and_write": false, 00:14:42.120 "abort": true, 00:14:42.120 "seek_hole": false, 00:14:42.120 "seek_data": false, 00:14:42.120 "copy": true, 00:14:42.120 "nvme_iov_md": false 00:14:42.120 }, 00:14:42.120 "memory_domains": [ 00:14:42.120 { 00:14:42.120 "dma_device_id": "system", 00:14:42.120 "dma_device_type": 1 00:14:42.120 }, 00:14:42.120 { 00:14:42.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.120 "dma_device_type": 2 00:14:42.120 } 00:14:42.120 ], 00:14:42.120 "driver_specific": {} 00:14:42.120 }' 00:14:42.120 21:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:42.120 21:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:42.120 21:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:42.120 21:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:42.120 21:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:42.120 21:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:42.120 21:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:42.120 21:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:42.379 21:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:42.379 21:58:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:42.379 21:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:42.379 21:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:42.379 21:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:42.638 [2024-07-13 21:58:01.772614] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:42.638 [2024-07-13 21:58:01.772644] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:42.638 [2024-07-13 21:58:01.772726] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:42.638 [2024-07-13 21:58:01.772780] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:42.638 [2024-07-13 21:58:01.772798] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041780 name Existed_Raid, state offline 00:14:42.638 21:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1373058 00:14:42.638 21:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1373058 ']' 00:14:42.639 21:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 1373058 00:14:42.639 21:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:14:42.639 21:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:42.639 21:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1373058 00:14:42.639 21:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:42.639 21:58:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:42.639 21:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1373058' 00:14:42.639 killing process with pid 1373058 00:14:42.639 21:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 1373058 00:14:42.639 [2024-07-13 21:58:01.842682] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:42.639 21:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 1373058 00:14:42.898 [2024-07-13 21:58:02.065626] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:44.277 21:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:44.277 00:14:44.277 real 0m23.321s 00:14:44.277 user 0m40.816s 00:14:44.277 sys 0m4.398s 00:14:44.277 21:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:44.277 21:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.277 ************************************ 00:14:44.277 END TEST raid_state_function_test_sb 00:14:44.277 ************************************ 00:14:44.277 21:58:03 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:44.277 21:58:03 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:14:44.277 21:58:03 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:44.277 21:58:03 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:44.277 21:58:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:44.277 ************************************ 00:14:44.277 START TEST raid_superblock_test 00:14:44.277 ************************************ 00:14:44.277 21:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 3 
00:14:44.277 21:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:14:44.277 21:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:14:44.277 21:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:14:44.277 21:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:14:44.277 21:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:14:44.277 21:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:14:44.277 21:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:14:44.277 21:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:14:44.277 21:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:14:44.277 21:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:14:44.277 21:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:14:44.277 21:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:14:44.277 21:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:14:44.277 21:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:14:44.277 21:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:14:44.277 21:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:14:44.277 21:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1377635 00:14:44.277 21:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1377635 /var/tmp/spdk-raid.sock 00:14:44.277 21:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:44.277 21:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 1377635 ']' 00:14:44.277 21:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:44.277 21:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:44.277 21:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:44.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:44.277 21:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:44.277 21:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.277 [2024-07-13 21:58:03.473426] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:44.277 [2024-07-13 21:58:03.473517] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1377635 ] 00:14:44.277 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.277 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:44.277 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.277 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:44.277 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.277 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:44.277 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.277 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:44.277 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.277 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:44.277 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.277 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:44.277 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.278 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:44.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.278 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:44.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.278 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:44.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.278 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:44.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.278 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:44.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.278 EAL: Requested device 0000:3d:02.3 cannot be used 
00:14:44.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.278 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:44.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.278 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:44.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.278 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:44.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.278 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:44.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.278 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:44.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.278 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:44.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.278 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:44.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.278 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:44.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.278 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:44.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.278 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:44.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.278 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:44.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.278 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:44.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.278 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:44.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.278 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:44.278 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.278 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:44.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.278 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:44.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.278 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:44.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.278 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:44.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.278 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:44.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.278 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:44.278 [2024-07-13 21:58:03.636443] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.536 [2024-07-13 21:58:03.843758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.794 [2024-07-13 21:58:04.084608] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.795 [2024-07-13 21:58:04.084643] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:45.086 21:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:45.086 21:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:14:45.086 21:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:14:45.086 21:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:45.086 21:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:14:45.087 21:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:14:45.087 21:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:45.087 21:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:45.087 21:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:45.087 21:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:45.087 21:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:45.087 malloc1 00:14:45.087 21:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:45.373 [2024-07-13 21:58:04.610284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:45.373 [2024-07-13 21:58:04.610345] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.373 [2024-07-13 21:58:04.610388] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:14:45.373 [2024-07-13 21:58:04.610401] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.373 [2024-07-13 21:58:04.612528] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.373 [2024-07-13 21:58:04.612559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:45.373 pt1 00:14:45.373 21:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:45.373 21:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:45.373 21:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:14:45.373 21:58:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:14:45.374 21:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:45.374 21:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:45.374 21:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:45.374 21:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:45.374 21:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:45.632 malloc2 00:14:45.632 21:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:45.632 [2024-07-13 21:58:04.979107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:45.632 [2024-07-13 21:58:04.979161] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.632 [2024-07-13 21:58:04.979184] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:14:45.632 [2024-07-13 21:58:04.979196] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.632 [2024-07-13 21:58:04.981313] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.632 [2024-07-13 21:58:04.981347] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:45.632 pt2 00:14:45.632 21:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:45.632 21:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:45.632 21:58:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:14:45.632 21:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:14:45.632 21:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:45.632 21:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:45.632 21:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:45.632 21:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:45.632 21:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:14:45.890 malloc3 00:14:45.890 21:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:46.149 [2024-07-13 21:58:05.342012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:46.149 [2024-07-13 21:58:05.342063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:46.149 [2024-07-13 21:58:05.342101] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:14:46.149 [2024-07-13 21:58:05.342112] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:46.149 [2024-07-13 21:58:05.344189] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:46.149 [2024-07-13 21:58:05.344216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:46.149 pt3 00:14:46.149 21:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 
00:14:46.149 21:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:46.149 21:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:14:46.149 [2024-07-13 21:58:05.510488] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:46.149 [2024-07-13 21:58:05.512141] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:46.149 [2024-07-13 21:58:05.512200] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:46.149 [2024-07-13 21:58:05.512366] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041480 00:14:46.149 [2024-07-13 21:58:05.512381] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:46.149 [2024-07-13 21:58:05.512617] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:14:46.149 [2024-07-13 21:58:05.512785] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041480 00:14:46.149 [2024-07-13 21:58:05.512796] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041480 00:14:46.149 [2024-07-13 21:58:05.512940] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.149 21:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:46.149 21:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:46.149 21:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:46.149 21:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:46.149 21:58:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:46.149 21:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:46.149 21:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:46.149 21:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:46.149 21:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:46.149 21:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:46.149 21:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.149 21:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.408 21:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:46.408 "name": "raid_bdev1", 00:14:46.408 "uuid": "7ae76301-76d5-41dc-8f20-dc2c8aab77dd", 00:14:46.408 "strip_size_kb": 64, 00:14:46.408 "state": "online", 00:14:46.408 "raid_level": "raid0", 00:14:46.408 "superblock": true, 00:14:46.408 "num_base_bdevs": 3, 00:14:46.408 "num_base_bdevs_discovered": 3, 00:14:46.408 "num_base_bdevs_operational": 3, 00:14:46.408 "base_bdevs_list": [ 00:14:46.408 { 00:14:46.408 "name": "pt1", 00:14:46.408 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:46.408 "is_configured": true, 00:14:46.408 "data_offset": 2048, 00:14:46.408 "data_size": 63488 00:14:46.408 }, 00:14:46.408 { 00:14:46.408 "name": "pt2", 00:14:46.408 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:46.408 "is_configured": true, 00:14:46.408 "data_offset": 2048, 00:14:46.408 "data_size": 63488 00:14:46.408 }, 00:14:46.408 { 00:14:46.408 "name": "pt3", 00:14:46.408 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:46.408 
"is_configured": true, 00:14:46.408 "data_offset": 2048, 00:14:46.408 "data_size": 63488 00:14:46.408 } 00:14:46.408 ] 00:14:46.408 }' 00:14:46.408 21:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:46.408 21:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.976 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:14:46.976 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:46.976 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:46.976 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:46.976 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:46.976 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:46.976 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:46.976 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:46.976 [2024-07-13 21:58:06.352994] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:47.235 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:47.235 "name": "raid_bdev1", 00:14:47.235 "aliases": [ 00:14:47.235 "7ae76301-76d5-41dc-8f20-dc2c8aab77dd" 00:14:47.235 ], 00:14:47.235 "product_name": "Raid Volume", 00:14:47.235 "block_size": 512, 00:14:47.235 "num_blocks": 190464, 00:14:47.235 "uuid": "7ae76301-76d5-41dc-8f20-dc2c8aab77dd", 00:14:47.235 "assigned_rate_limits": { 00:14:47.235 "rw_ios_per_sec": 0, 00:14:47.235 "rw_mbytes_per_sec": 0, 00:14:47.235 "r_mbytes_per_sec": 0, 00:14:47.235 "w_mbytes_per_sec": 0 00:14:47.235 }, 
00:14:47.235 "claimed": false, 00:14:47.235 "zoned": false, 00:14:47.235 "supported_io_types": { 00:14:47.235 "read": true, 00:14:47.235 "write": true, 00:14:47.235 "unmap": true, 00:14:47.235 "flush": true, 00:14:47.235 "reset": true, 00:14:47.235 "nvme_admin": false, 00:14:47.235 "nvme_io": false, 00:14:47.235 "nvme_io_md": false, 00:14:47.235 "write_zeroes": true, 00:14:47.235 "zcopy": false, 00:14:47.235 "get_zone_info": false, 00:14:47.235 "zone_management": false, 00:14:47.235 "zone_append": false, 00:14:47.235 "compare": false, 00:14:47.235 "compare_and_write": false, 00:14:47.235 "abort": false, 00:14:47.235 "seek_hole": false, 00:14:47.235 "seek_data": false, 00:14:47.235 "copy": false, 00:14:47.235 "nvme_iov_md": false 00:14:47.235 }, 00:14:47.235 "memory_domains": [ 00:14:47.235 { 00:14:47.235 "dma_device_id": "system", 00:14:47.235 "dma_device_type": 1 00:14:47.235 }, 00:14:47.235 { 00:14:47.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.235 "dma_device_type": 2 00:14:47.235 }, 00:14:47.235 { 00:14:47.235 "dma_device_id": "system", 00:14:47.235 "dma_device_type": 1 00:14:47.235 }, 00:14:47.235 { 00:14:47.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.235 "dma_device_type": 2 00:14:47.235 }, 00:14:47.235 { 00:14:47.235 "dma_device_id": "system", 00:14:47.235 "dma_device_type": 1 00:14:47.235 }, 00:14:47.235 { 00:14:47.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.235 "dma_device_type": 2 00:14:47.235 } 00:14:47.235 ], 00:14:47.235 "driver_specific": { 00:14:47.235 "raid": { 00:14:47.235 "uuid": "7ae76301-76d5-41dc-8f20-dc2c8aab77dd", 00:14:47.235 "strip_size_kb": 64, 00:14:47.235 "state": "online", 00:14:47.235 "raid_level": "raid0", 00:14:47.235 "superblock": true, 00:14:47.235 "num_base_bdevs": 3, 00:14:47.235 "num_base_bdevs_discovered": 3, 00:14:47.235 "num_base_bdevs_operational": 3, 00:14:47.235 "base_bdevs_list": [ 00:14:47.235 { 00:14:47.235 "name": "pt1", 00:14:47.235 "uuid": "00000000-0000-0000-0000-000000000001", 
00:14:47.235 "is_configured": true, 00:14:47.235 "data_offset": 2048, 00:14:47.235 "data_size": 63488 00:14:47.235 }, 00:14:47.235 { 00:14:47.235 "name": "pt2", 00:14:47.235 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:47.235 "is_configured": true, 00:14:47.235 "data_offset": 2048, 00:14:47.235 "data_size": 63488 00:14:47.235 }, 00:14:47.235 { 00:14:47.235 "name": "pt3", 00:14:47.235 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:47.235 "is_configured": true, 00:14:47.235 "data_offset": 2048, 00:14:47.235 "data_size": 63488 00:14:47.235 } 00:14:47.235 ] 00:14:47.235 } 00:14:47.235 } 00:14:47.235 }' 00:14:47.235 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:47.235 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:47.235 pt2 00:14:47.235 pt3' 00:14:47.235 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:47.235 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:47.235 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:47.235 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:47.235 "name": "pt1", 00:14:47.235 "aliases": [ 00:14:47.235 "00000000-0000-0000-0000-000000000001" 00:14:47.235 ], 00:14:47.235 "product_name": "passthru", 00:14:47.235 "block_size": 512, 00:14:47.235 "num_blocks": 65536, 00:14:47.235 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:47.235 "assigned_rate_limits": { 00:14:47.235 "rw_ios_per_sec": 0, 00:14:47.235 "rw_mbytes_per_sec": 0, 00:14:47.235 "r_mbytes_per_sec": 0, 00:14:47.235 "w_mbytes_per_sec": 0 00:14:47.235 }, 00:14:47.235 "claimed": true, 00:14:47.235 "claim_type": "exclusive_write", 
00:14:47.235 "zoned": false, 00:14:47.235 "supported_io_types": { 00:14:47.235 "read": true, 00:14:47.235 "write": true, 00:14:47.235 "unmap": true, 00:14:47.235 "flush": true, 00:14:47.235 "reset": true, 00:14:47.235 "nvme_admin": false, 00:14:47.235 "nvme_io": false, 00:14:47.235 "nvme_io_md": false, 00:14:47.235 "write_zeroes": true, 00:14:47.235 "zcopy": true, 00:14:47.235 "get_zone_info": false, 00:14:47.235 "zone_management": false, 00:14:47.235 "zone_append": false, 00:14:47.235 "compare": false, 00:14:47.235 "compare_and_write": false, 00:14:47.235 "abort": true, 00:14:47.235 "seek_hole": false, 00:14:47.235 "seek_data": false, 00:14:47.235 "copy": true, 00:14:47.235 "nvme_iov_md": false 00:14:47.235 }, 00:14:47.235 "memory_domains": [ 00:14:47.235 { 00:14:47.235 "dma_device_id": "system", 00:14:47.235 "dma_device_type": 1 00:14:47.235 }, 00:14:47.235 { 00:14:47.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.235 "dma_device_type": 2 00:14:47.235 } 00:14:47.235 ], 00:14:47.235 "driver_specific": { 00:14:47.235 "passthru": { 00:14:47.235 "name": "pt1", 00:14:47.235 "base_bdev_name": "malloc1" 00:14:47.235 } 00:14:47.235 } 00:14:47.235 }' 00:14:47.235 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:47.494 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:47.494 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:47.494 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:47.494 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:47.494 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:47.494 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:47.494 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:47.494 21:58:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:47.494 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:47.494 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:47.752 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:47.752 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:47.753 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:47.753 21:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:47.753 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:47.753 "name": "pt2", 00:14:47.753 "aliases": [ 00:14:47.753 "00000000-0000-0000-0000-000000000002" 00:14:47.753 ], 00:14:47.753 "product_name": "passthru", 00:14:47.753 "block_size": 512, 00:14:47.753 "num_blocks": 65536, 00:14:47.753 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:47.753 "assigned_rate_limits": { 00:14:47.753 "rw_ios_per_sec": 0, 00:14:47.753 "rw_mbytes_per_sec": 0, 00:14:47.753 "r_mbytes_per_sec": 0, 00:14:47.753 "w_mbytes_per_sec": 0 00:14:47.753 }, 00:14:47.753 "claimed": true, 00:14:47.753 "claim_type": "exclusive_write", 00:14:47.753 "zoned": false, 00:14:47.753 "supported_io_types": { 00:14:47.753 "read": true, 00:14:47.753 "write": true, 00:14:47.753 "unmap": true, 00:14:47.753 "flush": true, 00:14:47.753 "reset": true, 00:14:47.753 "nvme_admin": false, 00:14:47.753 "nvme_io": false, 00:14:47.753 "nvme_io_md": false, 00:14:47.753 "write_zeroes": true, 00:14:47.753 "zcopy": true, 00:14:47.753 "get_zone_info": false, 00:14:47.753 "zone_management": false, 00:14:47.753 "zone_append": false, 00:14:47.753 "compare": false, 00:14:47.753 "compare_and_write": false, 00:14:47.753 
"abort": true, 00:14:47.753 "seek_hole": false, 00:14:47.753 "seek_data": false, 00:14:47.753 "copy": true, 00:14:47.753 "nvme_iov_md": false 00:14:47.753 }, 00:14:47.753 "memory_domains": [ 00:14:47.753 { 00:14:47.753 "dma_device_id": "system", 00:14:47.753 "dma_device_type": 1 00:14:47.753 }, 00:14:47.753 { 00:14:47.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.753 "dma_device_type": 2 00:14:47.753 } 00:14:47.753 ], 00:14:47.753 "driver_specific": { 00:14:47.753 "passthru": { 00:14:47.753 "name": "pt2", 00:14:47.753 "base_bdev_name": "malloc2" 00:14:47.753 } 00:14:47.753 } 00:14:47.753 }' 00:14:47.753 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:47.753 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:47.753 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:47.753 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:48.011 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:48.011 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:48.011 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:48.011 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:48.011 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:48.011 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:48.011 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:48.011 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:48.011 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:48.011 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:48.011 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:48.270 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:48.270 "name": "pt3", 00:14:48.270 "aliases": [ 00:14:48.270 "00000000-0000-0000-0000-000000000003" 00:14:48.270 ], 00:14:48.270 "product_name": "passthru", 00:14:48.270 "block_size": 512, 00:14:48.270 "num_blocks": 65536, 00:14:48.270 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:48.270 "assigned_rate_limits": { 00:14:48.270 "rw_ios_per_sec": 0, 00:14:48.270 "rw_mbytes_per_sec": 0, 00:14:48.270 "r_mbytes_per_sec": 0, 00:14:48.270 "w_mbytes_per_sec": 0 00:14:48.270 }, 00:14:48.270 "claimed": true, 00:14:48.270 "claim_type": "exclusive_write", 00:14:48.270 "zoned": false, 00:14:48.270 "supported_io_types": { 00:14:48.270 "read": true, 00:14:48.270 "write": true, 00:14:48.270 "unmap": true, 00:14:48.270 "flush": true, 00:14:48.270 "reset": true, 00:14:48.270 "nvme_admin": false, 00:14:48.270 "nvme_io": false, 00:14:48.270 "nvme_io_md": false, 00:14:48.270 "write_zeroes": true, 00:14:48.270 "zcopy": true, 00:14:48.270 "get_zone_info": false, 00:14:48.270 "zone_management": false, 00:14:48.270 "zone_append": false, 00:14:48.270 "compare": false, 00:14:48.270 "compare_and_write": false, 00:14:48.270 "abort": true, 00:14:48.270 "seek_hole": false, 00:14:48.270 "seek_data": false, 00:14:48.270 "copy": true, 00:14:48.270 "nvme_iov_md": false 00:14:48.270 }, 00:14:48.270 "memory_domains": [ 00:14:48.270 { 00:14:48.270 "dma_device_id": "system", 00:14:48.270 "dma_device_type": 1 00:14:48.270 }, 00:14:48.270 { 00:14:48.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.270 "dma_device_type": 2 00:14:48.270 } 00:14:48.270 ], 00:14:48.270 "driver_specific": { 00:14:48.270 "passthru": { 00:14:48.270 "name": "pt3", 00:14:48.270 "base_bdev_name": "malloc3" 
00:14:48.270 } 00:14:48.270 } 00:14:48.270 }' 00:14:48.270 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:48.270 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:48.270 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:48.270 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:48.270 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:48.529 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:48.529 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:48.529 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:48.529 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:48.529 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:48.529 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:48.529 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:48.529 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:48.529 21:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:14:48.787 [2024-07-13 21:58:07.997327] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:48.787 21:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=7ae76301-76d5-41dc-8f20-dc2c8aab77dd 00:14:48.787 21:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 7ae76301-76d5-41dc-8f20-dc2c8aab77dd ']' 00:14:48.787 21:58:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:48.787 [2024-07-13 21:58:08.169485] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:48.787 [2024-07-13 21:58:08.169518] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:48.787 [2024-07-13 21:58:08.169602] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.787 [2024-07-13 21:58:08.169661] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:48.787 [2024-07-13 21:58:08.169673] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041480 name raid_bdev1, state offline 00:14:49.046 21:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:14:49.046 21:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.046 21:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:14:49.046 21:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:14:49.046 21:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:49.046 21:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:49.305 21:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:49.305 21:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:49.305 21:58:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:49.305 21:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:49.563 21:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:49.563 21:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:49.822 21:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:14:49.822 21:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:49.822 21:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:14:49.822 21:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:49.822 21:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:49.822 21:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:49.822 21:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:49.822 21:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:49.822 21:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:49.822 21:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:49.822 21:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:49.822 21:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:14:49.822 21:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:50.081 [2024-07-13 21:58:09.212223] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:50.081 [2024-07-13 21:58:09.214122] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:50.081 [2024-07-13 21:58:09.214176] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:50.081 [2024-07-13 21:58:09.214227] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:50.081 [2024-07-13 21:58:09.214273] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:50.081 [2024-07-13 21:58:09.214295] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:50.081 [2024-07-13 21:58:09.214313] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:50.081 [2024-07-13 21:58:09.214324] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state configuring 00:14:50.081 request: 00:14:50.081 { 00:14:50.081 "name": "raid_bdev1", 00:14:50.081 
"raid_level": "raid0", 00:14:50.081 "base_bdevs": [ 00:14:50.081 "malloc1", 00:14:50.081 "malloc2", 00:14:50.081 "malloc3" 00:14:50.081 ], 00:14:50.081 "strip_size_kb": 64, 00:14:50.081 "superblock": false, 00:14:50.081 "method": "bdev_raid_create", 00:14:50.081 "req_id": 1 00:14:50.081 } 00:14:50.081 Got JSON-RPC error response 00:14:50.081 response: 00:14:50.081 { 00:14:50.081 "code": -17, 00:14:50.081 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:50.081 } 00:14:50.081 21:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:14:50.081 21:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:50.081 21:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:50.081 21:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:50.081 21:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.081 21:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:14:50.081 21:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:14:50.081 21:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:14:50.081 21:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:50.340 [2024-07-13 21:58:09.537025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:50.340 [2024-07-13 21:58:09.537089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.340 [2024-07-13 21:58:09.537112] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000042080 00:14:50.340 [2024-07-13 21:58:09.537126] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.340 [2024-07-13 21:58:09.539419] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.340 [2024-07-13 21:58:09.539452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:50.340 [2024-07-13 21:58:09.539553] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:50.340 [2024-07-13 21:58:09.539615] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:50.340 pt1 00:14:50.340 21:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:14:50.340 21:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:50.340 21:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:50.340 21:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:50.340 21:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:50.340 21:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:50.340 21:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:50.340 21:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:50.340 21:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:50.340 21:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:50.340 21:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.340 21:58:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.340 21:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:50.340 "name": "raid_bdev1", 00:14:50.340 "uuid": "7ae76301-76d5-41dc-8f20-dc2c8aab77dd", 00:14:50.340 "strip_size_kb": 64, 00:14:50.340 "state": "configuring", 00:14:50.340 "raid_level": "raid0", 00:14:50.340 "superblock": true, 00:14:50.340 "num_base_bdevs": 3, 00:14:50.340 "num_base_bdevs_discovered": 1, 00:14:50.340 "num_base_bdevs_operational": 3, 00:14:50.340 "base_bdevs_list": [ 00:14:50.340 { 00:14:50.340 "name": "pt1", 00:14:50.340 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:50.340 "is_configured": true, 00:14:50.340 "data_offset": 2048, 00:14:50.340 "data_size": 63488 00:14:50.340 }, 00:14:50.340 { 00:14:50.340 "name": null, 00:14:50.340 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:50.340 "is_configured": false, 00:14:50.340 "data_offset": 2048, 00:14:50.340 "data_size": 63488 00:14:50.340 }, 00:14:50.340 { 00:14:50.340 "name": null, 00:14:50.340 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:50.340 "is_configured": false, 00:14:50.340 "data_offset": 2048, 00:14:50.340 "data_size": 63488 00:14:50.340 } 00:14:50.340 ] 00:14:50.340 }' 00:14:50.340 21:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:50.340 21:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.906 21:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:14:50.906 21:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:51.164 [2024-07-13 21:58:10.335127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:51.164 [2024-07-13 21:58:10.335199] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.164 [2024-07-13 21:58:10.335223] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980 00:14:51.164 [2024-07-13 21:58:10.335234] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.164 [2024-07-13 21:58:10.335711] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.164 [2024-07-13 21:58:10.335730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:51.164 [2024-07-13 21:58:10.335808] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:51.164 [2024-07-13 21:58:10.335831] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:51.164 pt2 00:14:51.164 21:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:51.164 [2024-07-13 21:58:10.511601] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:51.164 21:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:14:51.164 21:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:51.164 21:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:51.164 21:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:51.164 21:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:51.164 21:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:51.164 21:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:51.164 21:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:14:51.164 21:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:51.164 21:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:51.164 21:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.164 21:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.422 21:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:51.422 "name": "raid_bdev1", 00:14:51.422 "uuid": "7ae76301-76d5-41dc-8f20-dc2c8aab77dd", 00:14:51.422 "strip_size_kb": 64, 00:14:51.422 "state": "configuring", 00:14:51.422 "raid_level": "raid0", 00:14:51.422 "superblock": true, 00:14:51.422 "num_base_bdevs": 3, 00:14:51.422 "num_base_bdevs_discovered": 1, 00:14:51.422 "num_base_bdevs_operational": 3, 00:14:51.422 "base_bdevs_list": [ 00:14:51.422 { 00:14:51.422 "name": "pt1", 00:14:51.422 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:51.422 "is_configured": true, 00:14:51.422 "data_offset": 2048, 00:14:51.422 "data_size": 63488 00:14:51.422 }, 00:14:51.422 { 00:14:51.422 "name": null, 00:14:51.422 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:51.422 "is_configured": false, 00:14:51.422 "data_offset": 2048, 00:14:51.422 "data_size": 63488 00:14:51.422 }, 00:14:51.422 { 00:14:51.422 "name": null, 00:14:51.422 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:51.422 "is_configured": false, 00:14:51.422 "data_offset": 2048, 00:14:51.422 "data_size": 63488 00:14:51.422 } 00:14:51.422 ] 00:14:51.422 }' 00:14:51.422 21:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:51.422 21:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.989 21:58:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:14:51.989 21:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:51.989 21:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:51.989 [2024-07-13 21:58:11.353807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:51.989 [2024-07-13 21:58:11.353869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.989 [2024-07-13 21:58:11.353889] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:14:51.989 [2024-07-13 21:58:11.353907] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.989 [2024-07-13 21:58:11.354365] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.989 [2024-07-13 21:58:11.354386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:51.989 [2024-07-13 21:58:11.354458] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:51.989 [2024-07-13 21:58:11.354481] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:51.989 pt2 00:14:51.989 21:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:51.989 21:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:51.989 21:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:52.247 [2024-07-13 21:58:11.530236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:52.247 [2024-07-13 21:58:11.530285] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.247 [2024-07-13 21:58:11.530303] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042f80 00:14:52.247 [2024-07-13 21:58:11.530315] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.247 [2024-07-13 21:58:11.530714] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.247 [2024-07-13 21:58:11.530734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:52.247 [2024-07-13 21:58:11.530797] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:52.247 [2024-07-13 21:58:11.530822] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:52.247 [2024-07-13 21:58:11.530951] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042680 00:14:52.247 [2024-07-13 21:58:11.530964] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:52.247 [2024-07-13 21:58:11.531174] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:14:52.247 [2024-07-13 21:58:11.531321] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042680 00:14:52.247 [2024-07-13 21:58:11.531331] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042680 00:14:52.248 [2024-07-13 21:58:11.531453] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.248 pt3 00:14:52.248 21:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:52.248 21:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:52.248 21:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:52.248 21:58:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:52.248 21:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:52.248 21:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:52.248 21:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:52.248 21:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:52.248 21:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:52.248 21:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:52.248 21:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:52.248 21:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:52.248 21:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.248 21:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.507 21:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:52.507 "name": "raid_bdev1", 00:14:52.507 "uuid": "7ae76301-76d5-41dc-8f20-dc2c8aab77dd", 00:14:52.507 "strip_size_kb": 64, 00:14:52.507 "state": "online", 00:14:52.507 "raid_level": "raid0", 00:14:52.507 "superblock": true, 00:14:52.507 "num_base_bdevs": 3, 00:14:52.507 "num_base_bdevs_discovered": 3, 00:14:52.507 "num_base_bdevs_operational": 3, 00:14:52.507 "base_bdevs_list": [ 00:14:52.507 { 00:14:52.507 "name": "pt1", 00:14:52.507 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:52.507 "is_configured": true, 00:14:52.507 "data_offset": 2048, 00:14:52.507 "data_size": 63488 00:14:52.507 }, 00:14:52.507 { 00:14:52.507 "name": "pt2", 
00:14:52.507 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:52.507 "is_configured": true, 00:14:52.507 "data_offset": 2048, 00:14:52.507 "data_size": 63488 00:14:52.507 }, 00:14:52.507 { 00:14:52.507 "name": "pt3", 00:14:52.507 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:52.507 "is_configured": true, 00:14:52.507 "data_offset": 2048, 00:14:52.507 "data_size": 63488 00:14:52.507 } 00:14:52.507 ] 00:14:52.507 }' 00:14:52.507 21:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:52.507 21:58:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.075 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:14:53.075 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:53.075 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:53.075 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:53.075 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:53.075 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:53.075 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:53.075 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:53.075 [2024-07-13 21:58:12.376761] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.075 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:53.075 "name": "raid_bdev1", 00:14:53.075 "aliases": [ 00:14:53.075 "7ae76301-76d5-41dc-8f20-dc2c8aab77dd" 00:14:53.075 ], 00:14:53.075 "product_name": "Raid Volume", 00:14:53.075 "block_size": 512, 
00:14:53.075 "num_blocks": 190464, 00:14:53.075 "uuid": "7ae76301-76d5-41dc-8f20-dc2c8aab77dd", 00:14:53.075 "assigned_rate_limits": { 00:14:53.075 "rw_ios_per_sec": 0, 00:14:53.075 "rw_mbytes_per_sec": 0, 00:14:53.075 "r_mbytes_per_sec": 0, 00:14:53.075 "w_mbytes_per_sec": 0 00:14:53.075 }, 00:14:53.075 "claimed": false, 00:14:53.075 "zoned": false, 00:14:53.075 "supported_io_types": { 00:14:53.075 "read": true, 00:14:53.075 "write": true, 00:14:53.075 "unmap": true, 00:14:53.075 "flush": true, 00:14:53.075 "reset": true, 00:14:53.075 "nvme_admin": false, 00:14:53.075 "nvme_io": false, 00:14:53.075 "nvme_io_md": false, 00:14:53.075 "write_zeroes": true, 00:14:53.075 "zcopy": false, 00:14:53.075 "get_zone_info": false, 00:14:53.075 "zone_management": false, 00:14:53.075 "zone_append": false, 00:14:53.075 "compare": false, 00:14:53.075 "compare_and_write": false, 00:14:53.075 "abort": false, 00:14:53.075 "seek_hole": false, 00:14:53.075 "seek_data": false, 00:14:53.075 "copy": false, 00:14:53.075 "nvme_iov_md": false 00:14:53.075 }, 00:14:53.075 "memory_domains": [ 00:14:53.075 { 00:14:53.075 "dma_device_id": "system", 00:14:53.075 "dma_device_type": 1 00:14:53.075 }, 00:14:53.075 { 00:14:53.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.075 "dma_device_type": 2 00:14:53.076 }, 00:14:53.076 { 00:14:53.076 "dma_device_id": "system", 00:14:53.076 "dma_device_type": 1 00:14:53.076 }, 00:14:53.076 { 00:14:53.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.076 "dma_device_type": 2 00:14:53.076 }, 00:14:53.076 { 00:14:53.076 "dma_device_id": "system", 00:14:53.076 "dma_device_type": 1 00:14:53.076 }, 00:14:53.076 { 00:14:53.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.076 "dma_device_type": 2 00:14:53.076 } 00:14:53.076 ], 00:14:53.076 "driver_specific": { 00:14:53.076 "raid": { 00:14:53.076 "uuid": "7ae76301-76d5-41dc-8f20-dc2c8aab77dd", 00:14:53.076 "strip_size_kb": 64, 00:14:53.076 "state": "online", 00:14:53.076 "raid_level": "raid0", 
00:14:53.076 "superblock": true, 00:14:53.076 "num_base_bdevs": 3, 00:14:53.076 "num_base_bdevs_discovered": 3, 00:14:53.076 "num_base_bdevs_operational": 3, 00:14:53.076 "base_bdevs_list": [ 00:14:53.076 { 00:14:53.076 "name": "pt1", 00:14:53.076 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:53.076 "is_configured": true, 00:14:53.076 "data_offset": 2048, 00:14:53.076 "data_size": 63488 00:14:53.076 }, 00:14:53.076 { 00:14:53.076 "name": "pt2", 00:14:53.076 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:53.076 "is_configured": true, 00:14:53.076 "data_offset": 2048, 00:14:53.076 "data_size": 63488 00:14:53.076 }, 00:14:53.076 { 00:14:53.076 "name": "pt3", 00:14:53.076 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:53.076 "is_configured": true, 00:14:53.076 "data_offset": 2048, 00:14:53.076 "data_size": 63488 00:14:53.076 } 00:14:53.076 ] 00:14:53.076 } 00:14:53.076 } 00:14:53.076 }' 00:14:53.076 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:53.076 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:53.076 pt2 00:14:53.076 pt3' 00:14:53.076 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:53.076 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:53.076 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:53.335 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:53.335 "name": "pt1", 00:14:53.335 "aliases": [ 00:14:53.335 "00000000-0000-0000-0000-000000000001" 00:14:53.335 ], 00:14:53.335 "product_name": "passthru", 00:14:53.335 "block_size": 512, 00:14:53.335 "num_blocks": 65536, 00:14:53.335 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:14:53.335 "assigned_rate_limits": { 00:14:53.335 "rw_ios_per_sec": 0, 00:14:53.335 "rw_mbytes_per_sec": 0, 00:14:53.335 "r_mbytes_per_sec": 0, 00:14:53.335 "w_mbytes_per_sec": 0 00:14:53.335 }, 00:14:53.335 "claimed": true, 00:14:53.335 "claim_type": "exclusive_write", 00:14:53.335 "zoned": false, 00:14:53.335 "supported_io_types": { 00:14:53.335 "read": true, 00:14:53.335 "write": true, 00:14:53.335 "unmap": true, 00:14:53.335 "flush": true, 00:14:53.335 "reset": true, 00:14:53.335 "nvme_admin": false, 00:14:53.335 "nvme_io": false, 00:14:53.335 "nvme_io_md": false, 00:14:53.335 "write_zeroes": true, 00:14:53.335 "zcopy": true, 00:14:53.335 "get_zone_info": false, 00:14:53.335 "zone_management": false, 00:14:53.335 "zone_append": false, 00:14:53.335 "compare": false, 00:14:53.335 "compare_and_write": false, 00:14:53.335 "abort": true, 00:14:53.335 "seek_hole": false, 00:14:53.335 "seek_data": false, 00:14:53.335 "copy": true, 00:14:53.335 "nvme_iov_md": false 00:14:53.335 }, 00:14:53.335 "memory_domains": [ 00:14:53.335 { 00:14:53.335 "dma_device_id": "system", 00:14:53.335 "dma_device_type": 1 00:14:53.335 }, 00:14:53.335 { 00:14:53.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.335 "dma_device_type": 2 00:14:53.335 } 00:14:53.335 ], 00:14:53.335 "driver_specific": { 00:14:53.335 "passthru": { 00:14:53.335 "name": "pt1", 00:14:53.335 "base_bdev_name": "malloc1" 00:14:53.335 } 00:14:53.335 } 00:14:53.335 }' 00:14:53.335 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:53.335 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:53.335 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:53.335 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:53.335 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:53.595 21:58:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:53.595 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:53.595 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:53.595 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:53.595 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:53.595 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:53.595 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:53.595 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:53.595 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:53.595 21:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:53.854 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:53.854 "name": "pt2", 00:14:53.854 "aliases": [ 00:14:53.854 "00000000-0000-0000-0000-000000000002" 00:14:53.854 ], 00:14:53.854 "product_name": "passthru", 00:14:53.854 "block_size": 512, 00:14:53.854 "num_blocks": 65536, 00:14:53.854 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:53.854 "assigned_rate_limits": { 00:14:53.854 "rw_ios_per_sec": 0, 00:14:53.854 "rw_mbytes_per_sec": 0, 00:14:53.854 "r_mbytes_per_sec": 0, 00:14:53.854 "w_mbytes_per_sec": 0 00:14:53.854 }, 00:14:53.854 "claimed": true, 00:14:53.854 "claim_type": "exclusive_write", 00:14:53.854 "zoned": false, 00:14:53.854 "supported_io_types": { 00:14:53.854 "read": true, 00:14:53.854 "write": true, 00:14:53.855 "unmap": true, 00:14:53.855 "flush": true, 00:14:53.855 "reset": true, 00:14:53.855 "nvme_admin": false, 00:14:53.855 
"nvme_io": false, 00:14:53.855 "nvme_io_md": false, 00:14:53.855 "write_zeroes": true, 00:14:53.855 "zcopy": true, 00:14:53.855 "get_zone_info": false, 00:14:53.855 "zone_management": false, 00:14:53.855 "zone_append": false, 00:14:53.855 "compare": false, 00:14:53.855 "compare_and_write": false, 00:14:53.855 "abort": true, 00:14:53.855 "seek_hole": false, 00:14:53.855 "seek_data": false, 00:14:53.855 "copy": true, 00:14:53.855 "nvme_iov_md": false 00:14:53.855 }, 00:14:53.855 "memory_domains": [ 00:14:53.855 { 00:14:53.855 "dma_device_id": "system", 00:14:53.855 "dma_device_type": 1 00:14:53.855 }, 00:14:53.855 { 00:14:53.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.855 "dma_device_type": 2 00:14:53.855 } 00:14:53.855 ], 00:14:53.855 "driver_specific": { 00:14:53.855 "passthru": { 00:14:53.855 "name": "pt2", 00:14:53.855 "base_bdev_name": "malloc2" 00:14:53.855 } 00:14:53.855 } 00:14:53.855 }' 00:14:53.855 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:53.855 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:53.855 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:53.855 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:53.855 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:53.855 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:53.855 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:53.855 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:54.114 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:54.114 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:54.114 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:14:54.114 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:54.114 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:54.114 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:54.114 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:54.373 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:54.373 "name": "pt3", 00:14:54.373 "aliases": [ 00:14:54.373 "00000000-0000-0000-0000-000000000003" 00:14:54.373 ], 00:14:54.373 "product_name": "passthru", 00:14:54.373 "block_size": 512, 00:14:54.373 "num_blocks": 65536, 00:14:54.373 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:54.373 "assigned_rate_limits": { 00:14:54.373 "rw_ios_per_sec": 0, 00:14:54.373 "rw_mbytes_per_sec": 0, 00:14:54.373 "r_mbytes_per_sec": 0, 00:14:54.373 "w_mbytes_per_sec": 0 00:14:54.373 }, 00:14:54.373 "claimed": true, 00:14:54.373 "claim_type": "exclusive_write", 00:14:54.373 "zoned": false, 00:14:54.373 "supported_io_types": { 00:14:54.373 "read": true, 00:14:54.373 "write": true, 00:14:54.373 "unmap": true, 00:14:54.373 "flush": true, 00:14:54.373 "reset": true, 00:14:54.373 "nvme_admin": false, 00:14:54.373 "nvme_io": false, 00:14:54.373 "nvme_io_md": false, 00:14:54.373 "write_zeroes": true, 00:14:54.373 "zcopy": true, 00:14:54.373 "get_zone_info": false, 00:14:54.373 "zone_management": false, 00:14:54.373 "zone_append": false, 00:14:54.373 "compare": false, 00:14:54.373 "compare_and_write": false, 00:14:54.373 "abort": true, 00:14:54.373 "seek_hole": false, 00:14:54.373 "seek_data": false, 00:14:54.373 "copy": true, 00:14:54.373 "nvme_iov_md": false 00:14:54.373 }, 00:14:54.373 "memory_domains": [ 00:14:54.373 { 00:14:54.373 "dma_device_id": "system", 00:14:54.373 
"dma_device_type": 1 00:14:54.373 }, 00:14:54.373 { 00:14:54.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.373 "dma_device_type": 2 00:14:54.373 } 00:14:54.373 ], 00:14:54.373 "driver_specific": { 00:14:54.373 "passthru": { 00:14:54.373 "name": "pt3", 00:14:54.373 "base_bdev_name": "malloc3" 00:14:54.373 } 00:14:54.373 } 00:14:54.373 }' 00:14:54.373 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:54.373 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:54.373 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:54.373 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:54.373 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:54.373 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:54.373 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:54.373 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:54.373 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:54.373 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:54.632 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:54.632 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:54.632 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:54.632 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:14:54.632 [2024-07-13 21:58:13.981221] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:54.632 21:58:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 7ae76301-76d5-41dc-8f20-dc2c8aab77dd '!=' 7ae76301-76d5-41dc-8f20-dc2c8aab77dd ']' 00:14:54.632 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:14:54.632 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:54.632 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:54.632 21:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1377635 00:14:54.632 21:58:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 1377635 ']' 00:14:54.632 21:58:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 1377635 00:14:54.632 21:58:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:14:54.632 21:58:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:54.632 21:58:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1377635 00:14:54.891 21:58:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:54.891 21:58:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:54.891 21:58:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1377635' 00:14:54.891 killing process with pid 1377635 00:14:54.891 21:58:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 1377635 00:14:54.891 [2024-07-13 21:58:14.042423] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:54.891 [2024-07-13 21:58:14.042526] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.891 [2024-07-13 21:58:14.042585] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:54.891 
21:58:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 1377635 00:14:54.891 [2024-07-13 21:58:14.042599] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state offline 00:14:54.891 [2024-07-13 21:58:14.266971] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:56.271 21:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:14:56.271 00:14:56.271 real 0m12.108s 00:14:56.271 user 0m20.311s 00:14:56.271 sys 0m2.302s 00:14:56.271 21:58:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:56.271 21:58:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.271 ************************************ 00:14:56.271 END TEST raid_superblock_test 00:14:56.271 ************************************ 00:14:56.271 21:58:15 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:56.271 21:58:15 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:14:56.271 21:58:15 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:56.271 21:58:15 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:56.271 21:58:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:56.271 ************************************ 00:14:56.271 START TEST raid_read_error_test 00:14:56.271 ************************************ 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 read 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- 
# (( i = 1 )) 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:14:56.271 21:58:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.lNVz4xSI5P 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1380042 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1380042 /var/tmp/spdk-raid.sock 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 1380042 ']' 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:56.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:56.271 21:58:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.271 [2024-07-13 21:58:15.659545] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:56.271 [2024-07-13 21:58:15.659644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1380042 ] 00:14:56.530 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.530 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:56.531 [2024-07-13 21:58:15.816657] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.790 [2024-07-13 21:58:16.018786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.050 [2024-07-13 21:58:16.247375] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:57.050 [2024-07-13 21:58:16.247419] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:57.050 21:58:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:57.050 21:58:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:14:57.050 21:58:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:57.050 21:58:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:57.309 BaseBdev1_malloc 00:14:57.309 21:58:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:57.568 true 00:14:57.568 21:58:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:57.568 [2024-07-13 21:58:16.944338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:57.568 [2024-07-13 21:58:16.944392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.568 [2024-07-13 21:58:16.944413] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:14:57.568 [2024-07-13 21:58:16.944430] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.568 [2024-07-13 21:58:16.946532] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.568 [2024-07-13 21:58:16.946564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:57.568 BaseBdev1 00:14:57.828 21:58:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:57.828 21:58:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:57.828 BaseBdev2_malloc 00:14:57.828 21:58:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:58.087 true 00:14:58.087 21:58:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:58.346 [2024-07-13 21:58:17.476542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:14:58.346 [2024-07-13 21:58:17.476593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.346 [2024-07-13 21:58:17.476614] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:14:58.346 [2024-07-13 21:58:17.476629] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.346 [2024-07-13 21:58:17.478753] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.346 [2024-07-13 21:58:17.478784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:58.346 BaseBdev2 00:14:58.346 21:58:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:58.346 21:58:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:58.346 BaseBdev3_malloc 00:14:58.346 21:58:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:58.605 true 00:14:58.605 21:58:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:58.876 [2024-07-13 21:58:18.011712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:58.876 [2024-07-13 21:58:18.011758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.876 [2024-07-13 21:58:18.011779] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:14:58.876 [2024-07-13 21:58:18.011793] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.876 [2024-07-13 
21:58:18.013846] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.876 [2024-07-13 21:58:18.013876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:58.876 BaseBdev3 00:14:58.876 21:58:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:14:58.876 [2024-07-13 21:58:18.196223] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:58.876 [2024-07-13 21:58:18.198012] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:58.876 [2024-07-13 21:58:18.198080] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:58.876 [2024-07-13 21:58:18.198287] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041d80 00:14:58.876 [2024-07-13 21:58:18.198306] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:58.876 [2024-07-13 21:58:18.198553] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:14:58.876 [2024-07-13 21:58:18.198744] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041d80 00:14:58.876 [2024-07-13 21:58:18.198762] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041d80 00:14:58.876 [2024-07-13 21:58:18.198915] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.876 21:58:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:58.876 21:58:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:58.876 21:58:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:14:58.876 21:58:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:58.876 21:58:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:58.876 21:58:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:58.876 21:58:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:58.876 21:58:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:58.876 21:58:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:58.876 21:58:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:58.876 21:58:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.876 21:58:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.150 21:58:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:59.150 "name": "raid_bdev1", 00:14:59.150 "uuid": "b94befb0-fb0c-40af-b951-8ac9de9f4a33", 00:14:59.150 "strip_size_kb": 64, 00:14:59.150 "state": "online", 00:14:59.150 "raid_level": "raid0", 00:14:59.150 "superblock": true, 00:14:59.150 "num_base_bdevs": 3, 00:14:59.150 "num_base_bdevs_discovered": 3, 00:14:59.150 "num_base_bdevs_operational": 3, 00:14:59.150 "base_bdevs_list": [ 00:14:59.150 { 00:14:59.150 "name": "BaseBdev1", 00:14:59.150 "uuid": "834073f1-c35c-59b9-a4ef-7c6084cb3741", 00:14:59.150 "is_configured": true, 00:14:59.150 "data_offset": 2048, 00:14:59.150 "data_size": 63488 00:14:59.150 }, 00:14:59.150 { 00:14:59.150 "name": "BaseBdev2", 00:14:59.150 "uuid": "29e2151e-c9ef-5daa-a673-fc81c1b32e25", 00:14:59.150 "is_configured": true, 00:14:59.150 "data_offset": 2048, 
00:14:59.150 "data_size": 63488 00:14:59.150 }, 00:14:59.150 { 00:14:59.150 "name": "BaseBdev3", 00:14:59.150 "uuid": "8171b132-59f2-55e2-b414-6858c43920af", 00:14:59.150 "is_configured": true, 00:14:59.150 "data_offset": 2048, 00:14:59.150 "data_size": 63488 00:14:59.150 } 00:14:59.150 ] 00:14:59.150 }' 00:14:59.150 21:58:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:59.150 21:58:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.719 21:58:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:59.719 21:58:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:59.719 [2024-07-13 21:58:18.963547] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:15:00.657 21:58:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:00.917 21:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:00.917 21:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:00.917 21:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:15:00.917 21:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:00.917 21:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:00.917 21:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:00.917 21:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:00.917 21:58:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:00.917 21:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:00.917 21:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:00.917 21:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:00.917 21:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:00.917 21:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:00.917 21:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.917 21:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.917 21:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:00.917 "name": "raid_bdev1", 00:15:00.917 "uuid": "b94befb0-fb0c-40af-b951-8ac9de9f4a33", 00:15:00.917 "strip_size_kb": 64, 00:15:00.917 "state": "online", 00:15:00.917 "raid_level": "raid0", 00:15:00.917 "superblock": true, 00:15:00.917 "num_base_bdevs": 3, 00:15:00.917 "num_base_bdevs_discovered": 3, 00:15:00.917 "num_base_bdevs_operational": 3, 00:15:00.917 "base_bdevs_list": [ 00:15:00.917 { 00:15:00.917 "name": "BaseBdev1", 00:15:00.917 "uuid": "834073f1-c35c-59b9-a4ef-7c6084cb3741", 00:15:00.917 "is_configured": true, 00:15:00.917 "data_offset": 2048, 00:15:00.917 "data_size": 63488 00:15:00.917 }, 00:15:00.917 { 00:15:00.917 "name": "BaseBdev2", 00:15:00.917 "uuid": "29e2151e-c9ef-5daa-a673-fc81c1b32e25", 00:15:00.917 "is_configured": true, 00:15:00.917 "data_offset": 2048, 00:15:00.917 "data_size": 63488 00:15:00.917 }, 00:15:00.917 { 00:15:00.917 "name": "BaseBdev3", 00:15:00.917 "uuid": "8171b132-59f2-55e2-b414-6858c43920af", 
00:15:00.917 "is_configured": true, 00:15:00.917 "data_offset": 2048, 00:15:00.917 "data_size": 63488 00:15:00.917 } 00:15:00.917 ] 00:15:00.917 }' 00:15:00.917 21:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:00.917 21:58:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.486 21:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:01.486 [2024-07-13 21:58:20.851523] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:01.486 [2024-07-13 21:58:20.851556] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:01.486 [2024-07-13 21:58:20.853939] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:01.486 [2024-07-13 21:58:20.853979] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.486 [2024-07-13 21:58:20.854017] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:01.486 [2024-07-13 21:58:20.854028] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041d80 name raid_bdev1, state offline 00:15:01.486 0 00:15:01.486 21:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1380042 00:15:01.486 21:58:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 1380042 ']' 00:15:01.486 21:58:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 1380042 00:15:01.486 21:58:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:15:01.745 21:58:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:01.745 21:58:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1380042 
00:15:01.745 21:58:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:01.745 21:58:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:01.745 21:58:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1380042' 00:15:01.745 killing process with pid 1380042 00:15:01.745 21:58:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 1380042 00:15:01.745 [2024-07-13 21:58:20.926726] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:01.745 21:58:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 1380042 00:15:01.745 [2024-07-13 21:58:21.089594] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:03.124 21:58:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.lNVz4xSI5P 00:15:03.124 21:58:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:03.124 21:58:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:03.124 21:58:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.53 00:15:03.124 21:58:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:15:03.124 21:58:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:03.124 21:58:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:03.124 21:58:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.53 != \0\.\0\0 ]] 00:15:03.124 00:15:03.124 real 0m6.806s 00:15:03.124 user 0m9.539s 00:15:03.124 sys 0m1.076s 00:15:03.124 21:58:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:03.124 21:58:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.124 ************************************ 00:15:03.124 
END TEST raid_read_error_test 00:15:03.124 ************************************ 00:15:03.124 21:58:22 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:03.124 21:58:22 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:15:03.124 21:58:22 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:03.124 21:58:22 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:03.124 21:58:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:03.124 ************************************ 00:15:03.124 START TEST raid_write_error_test 00:15:03.124 ************************************ 00:15:03.124 21:58:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 write 00:15:03.124 21:58:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:15:03.124 21:58:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.qBA035BFwI 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1381212 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1381212 
/var/tmp/spdk-raid.sock 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 1381212 ']' 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:03.125 21:58:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.385 [2024-07-13 21:58:22.570957] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:03.385 [2024-07-13 21:58:22.571054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1381212 ] 00:15:03.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.385 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:03.385 [2024-07-13 21:58:22.731583] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.644 [2024-07-13 21:58:22.934432] reactor.c: 941:reactor_run: 
*NOTICE*: Reactor started on core 0 00:15:03.903 [2024-07-13 21:58:23.183482] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.903 [2024-07-13 21:58:23.183514] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:04.161 21:58:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:04.161 21:58:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:04.161 21:58:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:04.161 21:58:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:04.161 BaseBdev1_malloc 00:15:04.161 21:58:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:04.420 true 00:15:04.420 21:58:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:04.678 [2024-07-13 21:58:23.831781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:04.678 [2024-07-13 21:58:23.831834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.678 [2024-07-13 21:58:23.831868] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:15:04.678 [2024-07-13 21:58:23.831884] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.678 [2024-07-13 21:58:23.834004] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.678 [2024-07-13 21:58:23.834033] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:15:04.678 BaseBdev1 00:15:04.678 21:58:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:04.678 21:58:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:04.678 BaseBdev2_malloc 00:15:04.678 21:58:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:04.937 true 00:15:04.937 21:58:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:05.196 [2024-07-13 21:58:24.349272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:05.196 [2024-07-13 21:58:24.349318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.196 [2024-07-13 21:58:24.349337] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:15:05.196 [2024-07-13 21:58:24.349353] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.196 [2024-07-13 21:58:24.351381] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.196 [2024-07-13 21:58:24.351409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:05.196 BaseBdev2 00:15:05.196 21:58:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:05.196 21:58:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:05.196 
BaseBdev3_malloc 00:15:05.196 21:58:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:15:05.454 true 00:15:05.454 21:58:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:05.712 [2024-07-13 21:58:24.921170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:05.712 [2024-07-13 21:58:24.921213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.712 [2024-07-13 21:58:24.921246] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:15:05.712 [2024-07-13 21:58:24.921260] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.712 [2024-07-13 21:58:24.923298] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.712 [2024-07-13 21:58:24.923326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:05.712 BaseBdev3 00:15:05.712 21:58:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:15:05.712 [2024-07-13 21:58:25.093649] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:05.712 [2024-07-13 21:58:25.095365] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:05.712 [2024-07-13 21:58:25.095427] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:05.712 [2024-07-13 21:58:25.095612] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 
0x616000041d80 00:15:05.712 [2024-07-13 21:58:25.095624] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:05.712 [2024-07-13 21:58:25.095840] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:15:05.712 [2024-07-13 21:58:25.096018] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041d80 00:15:05.713 [2024-07-13 21:58:25.096036] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041d80 00:15:05.713 [2024-07-13 21:58:25.096160] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.971 21:58:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:05.971 21:58:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:05.971 21:58:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:05.971 21:58:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:05.971 21:58:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:05.971 21:58:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:05.971 21:58:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:05.971 21:58:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:05.971 21:58:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:05.971 21:58:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:05.971 21:58:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.971 21:58:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.971 21:58:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:05.971 "name": "raid_bdev1", 00:15:05.971 "uuid": "1069a47e-5eb3-48b1-9d5f-9c211255592f", 00:15:05.971 "strip_size_kb": 64, 00:15:05.971 "state": "online", 00:15:05.971 "raid_level": "raid0", 00:15:05.971 "superblock": true, 00:15:05.971 "num_base_bdevs": 3, 00:15:05.971 "num_base_bdevs_discovered": 3, 00:15:05.971 "num_base_bdevs_operational": 3, 00:15:05.971 "base_bdevs_list": [ 00:15:05.971 { 00:15:05.971 "name": "BaseBdev1", 00:15:05.971 "uuid": "e6afce8b-3c46-56f4-9ac5-b7197b50d568", 00:15:05.971 "is_configured": true, 00:15:05.971 "data_offset": 2048, 00:15:05.971 "data_size": 63488 00:15:05.971 }, 00:15:05.971 { 00:15:05.971 "name": "BaseBdev2", 00:15:05.971 "uuid": "c53d2fec-0dea-5504-8514-1aea52fd5511", 00:15:05.971 "is_configured": true, 00:15:05.971 "data_offset": 2048, 00:15:05.971 "data_size": 63488 00:15:05.971 }, 00:15:05.971 { 00:15:05.971 "name": "BaseBdev3", 00:15:05.971 "uuid": "deec3fc4-f6f8-5f96-9147-4fb753c5a557", 00:15:05.971 "is_configured": true, 00:15:05.971 "data_offset": 2048, 00:15:05.971 "data_size": 63488 00:15:05.971 } 00:15:05.971 ] 00:15:05.971 }' 00:15:05.971 21:58:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:05.971 21:58:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.540 21:58:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:06.540 21:58:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:06.540 [2024-07-13 21:58:25.869162] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:15:07.478 21:58:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:07.739 21:58:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:07.739 21:58:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:07.739 21:58:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:15:07.739 21:58:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:07.739 21:58:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:07.739 21:58:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:07.739 21:58:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:07.739 21:58:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:07.739 21:58:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:07.739 21:58:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:07.739 21:58:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:07.739 21:58:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:07.739 21:58:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:07.739 21:58:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.739 21:58:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.998 21:58:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:07.998 "name": "raid_bdev1", 00:15:07.998 "uuid": "1069a47e-5eb3-48b1-9d5f-9c211255592f", 00:15:07.998 "strip_size_kb": 64, 00:15:07.998 "state": "online", 00:15:07.998 "raid_level": "raid0", 00:15:07.998 "superblock": true, 00:15:07.998 "num_base_bdevs": 3, 00:15:07.998 "num_base_bdevs_discovered": 3, 00:15:07.998 "num_base_bdevs_operational": 3, 00:15:07.998 "base_bdevs_list": [ 00:15:07.998 { 00:15:07.998 "name": "BaseBdev1", 00:15:07.998 "uuid": "e6afce8b-3c46-56f4-9ac5-b7197b50d568", 00:15:07.998 "is_configured": true, 00:15:07.998 "data_offset": 2048, 00:15:07.998 "data_size": 63488 00:15:07.998 }, 00:15:07.998 { 00:15:07.998 "name": "BaseBdev2", 00:15:07.998 "uuid": "c53d2fec-0dea-5504-8514-1aea52fd5511", 00:15:07.998 "is_configured": true, 00:15:07.998 "data_offset": 2048, 00:15:07.998 "data_size": 63488 00:15:07.998 }, 00:15:07.998 { 00:15:07.998 "name": "BaseBdev3", 00:15:07.998 "uuid": "deec3fc4-f6f8-5f96-9147-4fb753c5a557", 00:15:07.998 "is_configured": true, 00:15:07.998 "data_offset": 2048, 00:15:07.998 "data_size": 63488 00:15:07.998 } 00:15:07.998 ] 00:15:07.998 }' 00:15:07.998 21:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:07.998 21:58:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.257 21:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:08.517 [2024-07-13 21:58:27.761716] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:08.517 [2024-07-13 21:58:27.761762] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:08.517 [2024-07-13 21:58:27.764038] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:08.517 [2024-07-13 21:58:27.764078] bdev_raid.c: 
331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.517 [2024-07-13 21:58:27.764114] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:08.517 [2024-07-13 21:58:27.764125] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041d80 name raid_bdev1, state offline 00:15:08.517 0 00:15:08.517 21:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1381212 00:15:08.517 21:58:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 1381212 ']' 00:15:08.517 21:58:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 1381212 00:15:08.517 21:58:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:15:08.517 21:58:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:08.517 21:58:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1381212 00:15:08.517 21:58:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:08.517 21:58:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:08.517 21:58:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1381212' 00:15:08.517 killing process with pid 1381212 00:15:08.517 21:58:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 1381212 00:15:08.517 [2024-07-13 21:58:27.835620] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:08.517 21:58:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 1381212 00:15:08.776 [2024-07-13 21:58:27.998989] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:10.153 21:58:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.qBA035BFwI 
00:15:10.153 21:58:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:10.153 21:58:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:10.153 21:58:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.53 00:15:10.153 21:58:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:15:10.153 21:58:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:10.153 21:58:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:10.153 21:58:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.53 != \0\.\0\0 ]] 00:15:10.153 00:15:10.153 real 0m6.834s 00:15:10.153 user 0m9.506s 00:15:10.153 sys 0m1.142s 00:15:10.153 21:58:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:10.153 21:58:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.153 ************************************ 00:15:10.153 END TEST raid_write_error_test 00:15:10.153 ************************************ 00:15:10.153 21:58:29 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:10.153 21:58:29 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:15:10.153 21:58:29 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:15:10.153 21:58:29 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:10.153 21:58:29 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:10.153 21:58:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:10.153 ************************************ 00:15:10.153 START TEST raid_state_function_test 00:15:10.153 ************************************ 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 false 
00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1382512 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1382512' 00:15:10.153 Process raid pid: 1382512 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1382512 /var/tmp/spdk-raid.sock 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 1382512 ']' 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:10.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:10.153 21:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.153 [2024-07-13 21:58:29.467354] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:10.153 [2024-07-13 21:58:29.467448] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.412 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:10.412 (the same pair of messages repeats for each remaining QAT device, 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7) 00:15:10.412 [2024-07-13 21:58:29.631016] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.671 [2024-07-13 21:58:29.838822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.929 [2024-07-13 21:58:30.087486] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.929 [2024-07-13 21:58:30.087517] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.929 21:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:10.929 21:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0
00:15:10.929 21:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:11.188 [2024-07-13 21:58:30.374408] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:11.188 [2024-07-13 21:58:30.374450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:11.188 [2024-07-13 21:58:30.374459] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:11.188 [2024-07-13 21:58:30.374486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:11.188 [2024-07-13 21:58:30.374494] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:11.188 [2024-07-13 21:58:30.374505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:11.188 21:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:11.188 21:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:11.188 21:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:11.188 21:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:11.188 21:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:11.188 21:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:11.188 21:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:11.188 21:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:15:11.188 21:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:11.188 21:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:11.188 21:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.188 21:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.188 21:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:11.188 "name": "Existed_Raid", 00:15:11.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.188 "strip_size_kb": 64, 00:15:11.188 "state": "configuring", 00:15:11.188 "raid_level": "concat", 00:15:11.188 "superblock": false, 00:15:11.188 "num_base_bdevs": 3, 00:15:11.188 "num_base_bdevs_discovered": 0, 00:15:11.188 "num_base_bdevs_operational": 3, 00:15:11.188 "base_bdevs_list": [ 00:15:11.188 { 00:15:11.188 "name": "BaseBdev1", 00:15:11.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.188 "is_configured": false, 00:15:11.188 "data_offset": 0, 00:15:11.188 "data_size": 0 00:15:11.188 }, 00:15:11.188 { 00:15:11.188 "name": "BaseBdev2", 00:15:11.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.188 "is_configured": false, 00:15:11.188 "data_offset": 0, 00:15:11.188 "data_size": 0 00:15:11.188 }, 00:15:11.188 { 00:15:11.188 "name": "BaseBdev3", 00:15:11.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.188 "is_configured": false, 00:15:11.188 "data_offset": 0, 00:15:11.188 "data_size": 0 00:15:11.188 } 00:15:11.188 ] 00:15:11.188 }' 00:15:11.188 21:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:11.188 21:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.754 21:58:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:12.012 [2024-07-13 21:58:31.212526] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:12.013 [2024-07-13 21:58:31.212561] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:15:12.013 21:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:12.013 [2024-07-13 21:58:31.381036] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:12.013 [2024-07-13 21:58:31.381073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:12.013 [2024-07-13 21:58:31.381082] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:12.013 [2024-07-13 21:58:31.381111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:12.013 [2024-07-13 21:58:31.381120] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:12.013 [2024-07-13 21:58:31.381131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:12.013 21:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:12.271 [2024-07-13 21:58:31.589231] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:12.271 BaseBdev1 00:15:12.271 21:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:12.271 
21:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:12.271 21:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:12.271 21:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:12.271 21:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:12.271 21:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:12.271 21:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:12.530 21:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:12.789 [ 00:15:12.789 { 00:15:12.789 "name": "BaseBdev1", 00:15:12.789 "aliases": [ 00:15:12.789 "04605b58-9fae-4c52-891a-63168f48ba51" 00:15:12.789 ], 00:15:12.789 "product_name": "Malloc disk", 00:15:12.789 "block_size": 512, 00:15:12.789 "num_blocks": 65536, 00:15:12.789 "uuid": "04605b58-9fae-4c52-891a-63168f48ba51", 00:15:12.789 "assigned_rate_limits": { 00:15:12.789 "rw_ios_per_sec": 0, 00:15:12.789 "rw_mbytes_per_sec": 0, 00:15:12.789 "r_mbytes_per_sec": 0, 00:15:12.789 "w_mbytes_per_sec": 0 00:15:12.789 }, 00:15:12.789 "claimed": true, 00:15:12.789 "claim_type": "exclusive_write", 00:15:12.789 "zoned": false, 00:15:12.789 "supported_io_types": { 00:15:12.789 "read": true, 00:15:12.789 "write": true, 00:15:12.789 "unmap": true, 00:15:12.789 "flush": true, 00:15:12.789 "reset": true, 00:15:12.789 "nvme_admin": false, 00:15:12.789 "nvme_io": false, 00:15:12.789 "nvme_io_md": false, 00:15:12.789 "write_zeroes": true, 00:15:12.789 "zcopy": true, 00:15:12.789 "get_zone_info": false, 00:15:12.789 
"zone_management": false, 00:15:12.789 "zone_append": false, 00:15:12.789 "compare": false, 00:15:12.789 "compare_and_write": false, 00:15:12.789 "abort": true, 00:15:12.789 "seek_hole": false, 00:15:12.789 "seek_data": false, 00:15:12.789 "copy": true, 00:15:12.789 "nvme_iov_md": false 00:15:12.789 }, 00:15:12.789 "memory_domains": [ 00:15:12.789 { 00:15:12.789 "dma_device_id": "system", 00:15:12.789 "dma_device_type": 1 00:15:12.789 }, 00:15:12.789 { 00:15:12.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.789 "dma_device_type": 2 00:15:12.789 } 00:15:12.789 ], 00:15:12.789 "driver_specific": {} 00:15:12.789 } 00:15:12.789 ] 00:15:12.789 21:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:12.789 21:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:12.790 21:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:12.790 21:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:12.790 21:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:12.790 21:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:12.790 21:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:12.790 21:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:12.790 21:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:12.790 21:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:12.790 21:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:12.790 21:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.790 21:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.790 21:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:12.790 "name": "Existed_Raid", 00:15:12.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.790 "strip_size_kb": 64, 00:15:12.790 "state": "configuring", 00:15:12.790 "raid_level": "concat", 00:15:12.790 "superblock": false, 00:15:12.790 "num_base_bdevs": 3, 00:15:12.790 "num_base_bdevs_discovered": 1, 00:15:12.790 "num_base_bdevs_operational": 3, 00:15:12.790 "base_bdevs_list": [ 00:15:12.790 { 00:15:12.790 "name": "BaseBdev1", 00:15:12.790 "uuid": "04605b58-9fae-4c52-891a-63168f48ba51", 00:15:12.790 "is_configured": true, 00:15:12.790 "data_offset": 0, 00:15:12.790 "data_size": 65536 00:15:12.790 }, 00:15:12.790 { 00:15:12.790 "name": "BaseBdev2", 00:15:12.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.790 "is_configured": false, 00:15:12.790 "data_offset": 0, 00:15:12.790 "data_size": 0 00:15:12.790 }, 00:15:12.790 { 00:15:12.790 "name": "BaseBdev3", 00:15:12.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.790 "is_configured": false, 00:15:12.790 "data_offset": 0, 00:15:12.790 "data_size": 0 00:15:12.790 } 00:15:12.790 ] 00:15:12.790 }' 00:15:12.790 21:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:12.790 21:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.357 21:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:13.615 [2024-07-13 21:58:32.776370] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:13.615 
[2024-07-13 21:58:32.776419] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:15:13.615 21:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:13.615 [2024-07-13 21:58:32.948892] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:13.615 [2024-07-13 21:58:32.950692] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:13.615 [2024-07-13 21:58:32.950731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:13.615 [2024-07-13 21:58:32.950742] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:13.615 [2024-07-13 21:58:32.950753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:13.615 21:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:13.615 21:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:13.615 21:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:13.615 21:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:13.615 21:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:13.615 21:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:13.615 21:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:13.615 21:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
00:15:13.615 21:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:13.615 21:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:13.615 21:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:13.615 21:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:13.615 21:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.615 21:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.873 21:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:13.873 "name": "Existed_Raid", 00:15:13.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.873 "strip_size_kb": 64, 00:15:13.873 "state": "configuring", 00:15:13.873 "raid_level": "concat", 00:15:13.873 "superblock": false, 00:15:13.873 "num_base_bdevs": 3, 00:15:13.873 "num_base_bdevs_discovered": 1, 00:15:13.873 "num_base_bdevs_operational": 3, 00:15:13.873 "base_bdevs_list": [ 00:15:13.873 { 00:15:13.873 "name": "BaseBdev1", 00:15:13.873 "uuid": "04605b58-9fae-4c52-891a-63168f48ba51", 00:15:13.873 "is_configured": true, 00:15:13.873 "data_offset": 0, 00:15:13.873 "data_size": 65536 00:15:13.873 }, 00:15:13.873 { 00:15:13.873 "name": "BaseBdev2", 00:15:13.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.873 "is_configured": false, 00:15:13.873 "data_offset": 0, 00:15:13.873 "data_size": 0 00:15:13.873 }, 00:15:13.873 { 00:15:13.873 "name": "BaseBdev3", 00:15:13.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.873 "is_configured": false, 00:15:13.873 "data_offset": 0, 00:15:13.873 "data_size": 0 00:15:13.873 } 00:15:13.873 ] 00:15:13.873 }' 00:15:13.873 21:58:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:13.873 21:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.440 21:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:14.440 [2024-07-13 21:58:33.822039] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:14.440 BaseBdev2 00:15:14.699 21:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:14.699 21:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:14.699 21:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:14.699 21:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:14.699 21:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:14.699 21:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:14.699 21:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:14.699 21:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:14.957 [ 00:15:14.957 { 00:15:14.957 "name": "BaseBdev2", 00:15:14.957 "aliases": [ 00:15:14.957 "63c96c14-a2d8-4e6a-8cea-82652aaf6155" 00:15:14.957 ], 00:15:14.957 "product_name": "Malloc disk", 00:15:14.957 "block_size": 512, 00:15:14.957 "num_blocks": 65536, 00:15:14.957 "uuid": "63c96c14-a2d8-4e6a-8cea-82652aaf6155", 00:15:14.957 
"assigned_rate_limits": { 00:15:14.957 "rw_ios_per_sec": 0, 00:15:14.957 "rw_mbytes_per_sec": 0, 00:15:14.957 "r_mbytes_per_sec": 0, 00:15:14.957 "w_mbytes_per_sec": 0 00:15:14.957 }, 00:15:14.957 "claimed": true, 00:15:14.957 "claim_type": "exclusive_write", 00:15:14.957 "zoned": false, 00:15:14.957 "supported_io_types": { 00:15:14.957 "read": true, 00:15:14.957 "write": true, 00:15:14.957 "unmap": true, 00:15:14.957 "flush": true, 00:15:14.957 "reset": true, 00:15:14.957 "nvme_admin": false, 00:15:14.957 "nvme_io": false, 00:15:14.957 "nvme_io_md": false, 00:15:14.957 "write_zeroes": true, 00:15:14.957 "zcopy": true, 00:15:14.957 "get_zone_info": false, 00:15:14.957 "zone_management": false, 00:15:14.957 "zone_append": false, 00:15:14.957 "compare": false, 00:15:14.957 "compare_and_write": false, 00:15:14.957 "abort": true, 00:15:14.957 "seek_hole": false, 00:15:14.957 "seek_data": false, 00:15:14.957 "copy": true, 00:15:14.958 "nvme_iov_md": false 00:15:14.958 }, 00:15:14.958 "memory_domains": [ 00:15:14.958 { 00:15:14.958 "dma_device_id": "system", 00:15:14.958 "dma_device_type": 1 00:15:14.958 }, 00:15:14.958 { 00:15:14.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.958 "dma_device_type": 2 00:15:14.958 } 00:15:14.958 ], 00:15:14.958 "driver_specific": {} 00:15:14.958 } 00:15:14.958 ] 00:15:14.958 21:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:14.958 21:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:14.958 21:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:14.958 21:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:14.958 21:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:14.958 21:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:15:14.958 21:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:14.958 21:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:14.958 21:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:14.958 21:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:14.958 21:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:14.958 21:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:14.958 21:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:14.958 21:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.958 21:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.217 21:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:15.217 "name": "Existed_Raid", 00:15:15.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.217 "strip_size_kb": 64, 00:15:15.217 "state": "configuring", 00:15:15.217 "raid_level": "concat", 00:15:15.217 "superblock": false, 00:15:15.217 "num_base_bdevs": 3, 00:15:15.217 "num_base_bdevs_discovered": 2, 00:15:15.217 "num_base_bdevs_operational": 3, 00:15:15.217 "base_bdevs_list": [ 00:15:15.217 { 00:15:15.217 "name": "BaseBdev1", 00:15:15.217 "uuid": "04605b58-9fae-4c52-891a-63168f48ba51", 00:15:15.217 "is_configured": true, 00:15:15.217 "data_offset": 0, 00:15:15.217 "data_size": 65536 00:15:15.217 }, 00:15:15.217 { 00:15:15.217 "name": "BaseBdev2", 00:15:15.217 "uuid": "63c96c14-a2d8-4e6a-8cea-82652aaf6155", 00:15:15.217 
"is_configured": true, 00:15:15.217 "data_offset": 0, 00:15:15.217 "data_size": 65536 00:15:15.217 }, 00:15:15.217 { 00:15:15.217 "name": "BaseBdev3", 00:15:15.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.217 "is_configured": false, 00:15:15.217 "data_offset": 0, 00:15:15.217 "data_size": 0 00:15:15.217 } 00:15:15.217 ] 00:15:15.217 }' 00:15:15.217 21:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:15.217 21:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.477 21:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:15.736 [2024-07-13 21:58:34.991547] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:15.736 [2024-07-13 21:58:34.991588] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:15:15.736 [2024-07-13 21:58:34.991600] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:15.736 [2024-07-13 21:58:34.991828] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:15:15.736 [2024-07-13 21:58:34.992049] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:15:15.736 [2024-07-13 21:58:34.992062] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:15:15.736 [2024-07-13 21:58:34.992329] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.736 BaseBdev3 00:15:15.736 21:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:15.736 21:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:15.736 21:58:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:15.736 21:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:15.736 21:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:15.736 21:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:15.736 21:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:15.996 21:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:15.996 [ 00:15:15.996 { 00:15:15.996 "name": "BaseBdev3", 00:15:15.996 "aliases": [ 00:15:15.996 "7f3180ce-6f1c-4001-82c3-4527b1f726be" 00:15:15.996 ], 00:15:15.996 "product_name": "Malloc disk", 00:15:15.996 "block_size": 512, 00:15:15.996 "num_blocks": 65536, 00:15:15.996 "uuid": "7f3180ce-6f1c-4001-82c3-4527b1f726be", 00:15:15.996 "assigned_rate_limits": { 00:15:15.996 "rw_ios_per_sec": 0, 00:15:15.996 "rw_mbytes_per_sec": 0, 00:15:15.996 "r_mbytes_per_sec": 0, 00:15:15.996 "w_mbytes_per_sec": 0 00:15:15.996 }, 00:15:15.996 "claimed": true, 00:15:15.996 "claim_type": "exclusive_write", 00:15:15.996 "zoned": false, 00:15:15.996 "supported_io_types": { 00:15:15.996 "read": true, 00:15:15.996 "write": true, 00:15:15.996 "unmap": true, 00:15:15.996 "flush": true, 00:15:15.996 "reset": true, 00:15:15.996 "nvme_admin": false, 00:15:15.996 "nvme_io": false, 00:15:15.996 "nvme_io_md": false, 00:15:15.996 "write_zeroes": true, 00:15:15.996 "zcopy": true, 00:15:15.996 "get_zone_info": false, 00:15:15.996 "zone_management": false, 00:15:15.996 "zone_append": false, 00:15:15.996 "compare": false, 00:15:15.996 "compare_and_write": false, 00:15:15.996 "abort": true, 00:15:15.996 
"seek_hole": false, 00:15:15.996 "seek_data": false, 00:15:15.996 "copy": true, 00:15:15.996 "nvme_iov_md": false 00:15:15.996 }, 00:15:15.996 "memory_domains": [ 00:15:15.996 { 00:15:15.996 "dma_device_id": "system", 00:15:15.996 "dma_device_type": 1 00:15:15.996 }, 00:15:15.996 { 00:15:15.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.996 "dma_device_type": 2 00:15:15.996 } 00:15:15.996 ], 00:15:15.996 "driver_specific": {} 00:15:15.996 } 00:15:15.996 ] 00:15:15.996 21:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:15.996 21:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:15.996 21:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:15.996 21:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:15.996 21:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:15.996 21:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:15.996 21:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:15.996 21:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:15.996 21:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:15.996 21:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:15.996 21:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:15.996 21:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:15.996 21:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:15.996 21:58:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.996 21:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.256 21:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:16.256 "name": "Existed_Raid", 00:15:16.256 "uuid": "60ae9cab-b0c6-405c-9399-e7babab3e018", 00:15:16.256 "strip_size_kb": 64, 00:15:16.256 "state": "online", 00:15:16.256 "raid_level": "concat", 00:15:16.256 "superblock": false, 00:15:16.256 "num_base_bdevs": 3, 00:15:16.256 "num_base_bdevs_discovered": 3, 00:15:16.256 "num_base_bdevs_operational": 3, 00:15:16.256 "base_bdevs_list": [ 00:15:16.256 { 00:15:16.256 "name": "BaseBdev1", 00:15:16.256 "uuid": "04605b58-9fae-4c52-891a-63168f48ba51", 00:15:16.256 "is_configured": true, 00:15:16.256 "data_offset": 0, 00:15:16.256 "data_size": 65536 00:15:16.256 }, 00:15:16.256 { 00:15:16.256 "name": "BaseBdev2", 00:15:16.256 "uuid": "63c96c14-a2d8-4e6a-8cea-82652aaf6155", 00:15:16.256 "is_configured": true, 00:15:16.256 "data_offset": 0, 00:15:16.256 "data_size": 65536 00:15:16.256 }, 00:15:16.256 { 00:15:16.256 "name": "BaseBdev3", 00:15:16.256 "uuid": "7f3180ce-6f1c-4001-82c3-4527b1f726be", 00:15:16.256 "is_configured": true, 00:15:16.256 "data_offset": 0, 00:15:16.256 "data_size": 65536 00:15:16.256 } 00:15:16.256 ] 00:15:16.256 }' 00:15:16.256 21:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:16.256 21:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.825 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:16.825 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:16.825 21:58:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:16.825 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:16.825 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:16.825 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:16.825 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:16.825 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:16.825 [2024-07-13 21:58:36.166961] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:16.825 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:16.825 "name": "Existed_Raid", 00:15:16.825 "aliases": [ 00:15:16.825 "60ae9cab-b0c6-405c-9399-e7babab3e018" 00:15:16.825 ], 00:15:16.825 "product_name": "Raid Volume", 00:15:16.825 "block_size": 512, 00:15:16.825 "num_blocks": 196608, 00:15:16.825 "uuid": "60ae9cab-b0c6-405c-9399-e7babab3e018", 00:15:16.825 "assigned_rate_limits": { 00:15:16.825 "rw_ios_per_sec": 0, 00:15:16.825 "rw_mbytes_per_sec": 0, 00:15:16.825 "r_mbytes_per_sec": 0, 00:15:16.825 "w_mbytes_per_sec": 0 00:15:16.825 }, 00:15:16.825 "claimed": false, 00:15:16.825 "zoned": false, 00:15:16.825 "supported_io_types": { 00:15:16.825 "read": true, 00:15:16.825 "write": true, 00:15:16.825 "unmap": true, 00:15:16.825 "flush": true, 00:15:16.825 "reset": true, 00:15:16.825 "nvme_admin": false, 00:15:16.825 "nvme_io": false, 00:15:16.825 "nvme_io_md": false, 00:15:16.825 "write_zeroes": true, 00:15:16.825 "zcopy": false, 00:15:16.825 "get_zone_info": false, 00:15:16.825 "zone_management": false, 00:15:16.825 "zone_append": false, 00:15:16.825 "compare": false, 00:15:16.825 "compare_and_write": false, 00:15:16.825 "abort": 
false, 00:15:16.825 "seek_hole": false, 00:15:16.825 "seek_data": false, 00:15:16.825 "copy": false, 00:15:16.825 "nvme_iov_md": false 00:15:16.825 }, 00:15:16.825 "memory_domains": [ 00:15:16.825 { 00:15:16.825 "dma_device_id": "system", 00:15:16.825 "dma_device_type": 1 00:15:16.825 }, 00:15:16.825 { 00:15:16.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.825 "dma_device_type": 2 00:15:16.825 }, 00:15:16.825 { 00:15:16.825 "dma_device_id": "system", 00:15:16.825 "dma_device_type": 1 00:15:16.825 }, 00:15:16.825 { 00:15:16.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.825 "dma_device_type": 2 00:15:16.825 }, 00:15:16.825 { 00:15:16.825 "dma_device_id": "system", 00:15:16.825 "dma_device_type": 1 00:15:16.825 }, 00:15:16.825 { 00:15:16.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.825 "dma_device_type": 2 00:15:16.825 } 00:15:16.825 ], 00:15:16.825 "driver_specific": { 00:15:16.825 "raid": { 00:15:16.825 "uuid": "60ae9cab-b0c6-405c-9399-e7babab3e018", 00:15:16.825 "strip_size_kb": 64, 00:15:16.825 "state": "online", 00:15:16.825 "raid_level": "concat", 00:15:16.825 "superblock": false, 00:15:16.825 "num_base_bdevs": 3, 00:15:16.825 "num_base_bdevs_discovered": 3, 00:15:16.825 "num_base_bdevs_operational": 3, 00:15:16.825 "base_bdevs_list": [ 00:15:16.825 { 00:15:16.825 "name": "BaseBdev1", 00:15:16.825 "uuid": "04605b58-9fae-4c52-891a-63168f48ba51", 00:15:16.825 "is_configured": true, 00:15:16.825 "data_offset": 0, 00:15:16.825 "data_size": 65536 00:15:16.825 }, 00:15:16.825 { 00:15:16.825 "name": "BaseBdev2", 00:15:16.825 "uuid": "63c96c14-a2d8-4e6a-8cea-82652aaf6155", 00:15:16.825 "is_configured": true, 00:15:16.825 "data_offset": 0, 00:15:16.825 "data_size": 65536 00:15:16.825 }, 00:15:16.825 { 00:15:16.825 "name": "BaseBdev3", 00:15:16.825 "uuid": "7f3180ce-6f1c-4001-82c3-4527b1f726be", 00:15:16.825 "is_configured": true, 00:15:16.825 "data_offset": 0, 00:15:16.825 "data_size": 65536 00:15:16.825 } 00:15:16.825 ] 00:15:16.825 } 
00:15:16.825 } 00:15:16.825 }' 00:15:16.825 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:17.084 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:17.084 BaseBdev2 00:15:17.084 BaseBdev3' 00:15:17.084 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:17.084 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:17.084 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:17.084 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:17.084 "name": "BaseBdev1", 00:15:17.084 "aliases": [ 00:15:17.084 "04605b58-9fae-4c52-891a-63168f48ba51" 00:15:17.084 ], 00:15:17.084 "product_name": "Malloc disk", 00:15:17.084 "block_size": 512, 00:15:17.084 "num_blocks": 65536, 00:15:17.084 "uuid": "04605b58-9fae-4c52-891a-63168f48ba51", 00:15:17.084 "assigned_rate_limits": { 00:15:17.084 "rw_ios_per_sec": 0, 00:15:17.084 "rw_mbytes_per_sec": 0, 00:15:17.084 "r_mbytes_per_sec": 0, 00:15:17.084 "w_mbytes_per_sec": 0 00:15:17.084 }, 00:15:17.084 "claimed": true, 00:15:17.084 "claim_type": "exclusive_write", 00:15:17.084 "zoned": false, 00:15:17.084 "supported_io_types": { 00:15:17.084 "read": true, 00:15:17.084 "write": true, 00:15:17.084 "unmap": true, 00:15:17.084 "flush": true, 00:15:17.084 "reset": true, 00:15:17.084 "nvme_admin": false, 00:15:17.084 "nvme_io": false, 00:15:17.084 "nvme_io_md": false, 00:15:17.084 "write_zeroes": true, 00:15:17.084 "zcopy": true, 00:15:17.084 "get_zone_info": false, 00:15:17.084 "zone_management": false, 00:15:17.084 "zone_append": false, 00:15:17.084 "compare": false, 00:15:17.084 
"compare_and_write": false, 00:15:17.084 "abort": true, 00:15:17.084 "seek_hole": false, 00:15:17.084 "seek_data": false, 00:15:17.084 "copy": true, 00:15:17.084 "nvme_iov_md": false 00:15:17.084 }, 00:15:17.084 "memory_domains": [ 00:15:17.084 { 00:15:17.084 "dma_device_id": "system", 00:15:17.084 "dma_device_type": 1 00:15:17.084 }, 00:15:17.084 { 00:15:17.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.084 "dma_device_type": 2 00:15:17.084 } 00:15:17.084 ], 00:15:17.084 "driver_specific": {} 00:15:17.084 }' 00:15:17.084 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:17.084 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:17.343 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:17.343 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:17.343 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:17.343 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:17.343 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:17.343 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:17.343 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:17.343 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:17.343 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:17.343 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:17.343 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:17.343 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:17.343 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:17.602 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:17.602 "name": "BaseBdev2", 00:15:17.602 "aliases": [ 00:15:17.602 "63c96c14-a2d8-4e6a-8cea-82652aaf6155" 00:15:17.602 ], 00:15:17.602 "product_name": "Malloc disk", 00:15:17.602 "block_size": 512, 00:15:17.602 "num_blocks": 65536, 00:15:17.602 "uuid": "63c96c14-a2d8-4e6a-8cea-82652aaf6155", 00:15:17.602 "assigned_rate_limits": { 00:15:17.602 "rw_ios_per_sec": 0, 00:15:17.602 "rw_mbytes_per_sec": 0, 00:15:17.602 "r_mbytes_per_sec": 0, 00:15:17.602 "w_mbytes_per_sec": 0 00:15:17.602 }, 00:15:17.602 "claimed": true, 00:15:17.602 "claim_type": "exclusive_write", 00:15:17.602 "zoned": false, 00:15:17.602 "supported_io_types": { 00:15:17.602 "read": true, 00:15:17.602 "write": true, 00:15:17.602 "unmap": true, 00:15:17.602 "flush": true, 00:15:17.602 "reset": true, 00:15:17.602 "nvme_admin": false, 00:15:17.602 "nvme_io": false, 00:15:17.602 "nvme_io_md": false, 00:15:17.602 "write_zeroes": true, 00:15:17.602 "zcopy": true, 00:15:17.602 "get_zone_info": false, 00:15:17.602 "zone_management": false, 00:15:17.602 "zone_append": false, 00:15:17.602 "compare": false, 00:15:17.602 "compare_and_write": false, 00:15:17.602 "abort": true, 00:15:17.602 "seek_hole": false, 00:15:17.602 "seek_data": false, 00:15:17.602 "copy": true, 00:15:17.603 "nvme_iov_md": false 00:15:17.603 }, 00:15:17.603 "memory_domains": [ 00:15:17.603 { 00:15:17.603 "dma_device_id": "system", 00:15:17.603 "dma_device_type": 1 00:15:17.603 }, 00:15:17.603 { 00:15:17.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.603 "dma_device_type": 2 00:15:17.603 } 00:15:17.603 ], 00:15:17.603 "driver_specific": {} 00:15:17.603 }' 00:15:17.603 21:58:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:17.603 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:17.603 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:17.603 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:17.603 21:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:17.862 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:17.862 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:17.862 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:17.862 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:17.862 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:17.862 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:17.862 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:17.862 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:17.862 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:17.862 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:18.121 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:18.121 "name": "BaseBdev3", 00:15:18.121 "aliases": [ 00:15:18.121 "7f3180ce-6f1c-4001-82c3-4527b1f726be" 00:15:18.121 ], 00:15:18.121 "product_name": "Malloc disk", 00:15:18.121 "block_size": 512, 00:15:18.121 "num_blocks": 65536, 00:15:18.121 "uuid": "7f3180ce-6f1c-4001-82c3-4527b1f726be", 
00:15:18.121 "assigned_rate_limits": { 00:15:18.121 "rw_ios_per_sec": 0, 00:15:18.121 "rw_mbytes_per_sec": 0, 00:15:18.121 "r_mbytes_per_sec": 0, 00:15:18.121 "w_mbytes_per_sec": 0 00:15:18.121 }, 00:15:18.121 "claimed": true, 00:15:18.121 "claim_type": "exclusive_write", 00:15:18.121 "zoned": false, 00:15:18.121 "supported_io_types": { 00:15:18.121 "read": true, 00:15:18.121 "write": true, 00:15:18.121 "unmap": true, 00:15:18.121 "flush": true, 00:15:18.121 "reset": true, 00:15:18.121 "nvme_admin": false, 00:15:18.121 "nvme_io": false, 00:15:18.121 "nvme_io_md": false, 00:15:18.121 "write_zeroes": true, 00:15:18.121 "zcopy": true, 00:15:18.121 "get_zone_info": false, 00:15:18.121 "zone_management": false, 00:15:18.121 "zone_append": false, 00:15:18.121 "compare": false, 00:15:18.121 "compare_and_write": false, 00:15:18.121 "abort": true, 00:15:18.121 "seek_hole": false, 00:15:18.121 "seek_data": false, 00:15:18.121 "copy": true, 00:15:18.121 "nvme_iov_md": false 00:15:18.121 }, 00:15:18.121 "memory_domains": [ 00:15:18.121 { 00:15:18.121 "dma_device_id": "system", 00:15:18.121 "dma_device_type": 1 00:15:18.121 }, 00:15:18.121 { 00:15:18.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.121 "dma_device_type": 2 00:15:18.121 } 00:15:18.121 ], 00:15:18.121 "driver_specific": {} 00:15:18.121 }' 00:15:18.121 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:18.121 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:18.121 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:18.121 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:18.121 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:18.121 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:18.121 21:58:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:18.380 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:18.380 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:18.380 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:18.380 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:18.380 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:18.380 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:18.639 [2024-07-13 21:58:37.807055] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:18.639 [2024-07-13 21:58:37.807080] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.639 [2024-07-13 21:58:37.807126] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.639 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:18.639 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:15:18.639 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:18.639 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:18.639 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:18.639 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:15:18.639 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:18.639 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=offline 00:15:18.639 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:18.639 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:18.639 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:18.639 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:18.639 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:18.639 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:18.639 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:18.639 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.639 21:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.639 21:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:18.639 "name": "Existed_Raid", 00:15:18.639 "uuid": "60ae9cab-b0c6-405c-9399-e7babab3e018", 00:15:18.639 "strip_size_kb": 64, 00:15:18.639 "state": "offline", 00:15:18.639 "raid_level": "concat", 00:15:18.639 "superblock": false, 00:15:18.639 "num_base_bdevs": 3, 00:15:18.639 "num_base_bdevs_discovered": 2, 00:15:18.639 "num_base_bdevs_operational": 2, 00:15:18.639 "base_bdevs_list": [ 00:15:18.639 { 00:15:18.639 "name": null, 00:15:18.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.639 "is_configured": false, 00:15:18.639 "data_offset": 0, 00:15:18.639 "data_size": 65536 00:15:18.639 }, 00:15:18.639 { 00:15:18.639 "name": "BaseBdev2", 00:15:18.639 "uuid": "63c96c14-a2d8-4e6a-8cea-82652aaf6155", 00:15:18.639 "is_configured": true, 
00:15:18.639 "data_offset": 0, 00:15:18.639 "data_size": 65536 00:15:18.639 }, 00:15:18.639 { 00:15:18.639 "name": "BaseBdev3", 00:15:18.639 "uuid": "7f3180ce-6f1c-4001-82c3-4527b1f726be", 00:15:18.639 "is_configured": true, 00:15:18.639 "data_offset": 0, 00:15:18.639 "data_size": 65536 00:15:18.639 } 00:15:18.639 ] 00:15:18.639 }' 00:15:18.639 21:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:18.639 21:58:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.207 21:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:19.207 21:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:19.207 21:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.207 21:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:19.466 21:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:19.466 21:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:19.466 21:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:19.466 [2024-07-13 21:58:38.824848] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:19.726 21:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:19.726 21:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:19.726 21:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:15:19.726 21:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:19.726 21:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:19.726 21:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:19.726 21:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:19.985 [2024-07-13 21:58:39.264652] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:19.985 [2024-07-13 21:58:39.264697] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:15:20.244 21:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:20.244 21:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:20.244 21:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.244 21:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:20.244 21:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:20.244 21:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:20.244 21:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:15:20.244 21:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:20.244 21:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:20.244 21:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:20.503 BaseBdev2 00:15:20.503 21:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:20.503 21:58:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:20.503 21:58:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:20.503 21:58:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:20.503 21:58:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:20.503 21:58:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:20.503 21:58:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:20.762 21:58:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:20.762 [ 00:15:20.762 { 00:15:20.762 "name": "BaseBdev2", 00:15:20.762 "aliases": [ 00:15:20.762 "b3a45de6-a46b-455c-b342-141b5ce76853" 00:15:20.762 ], 00:15:20.762 "product_name": "Malloc disk", 00:15:20.762 "block_size": 512, 00:15:20.762 "num_blocks": 65536, 00:15:20.762 "uuid": "b3a45de6-a46b-455c-b342-141b5ce76853", 00:15:20.762 "assigned_rate_limits": { 00:15:20.762 "rw_ios_per_sec": 0, 00:15:20.762 "rw_mbytes_per_sec": 0, 00:15:20.762 "r_mbytes_per_sec": 0, 00:15:20.762 "w_mbytes_per_sec": 0 00:15:20.762 }, 00:15:20.762 "claimed": false, 00:15:20.762 "zoned": false, 00:15:20.762 "supported_io_types": { 00:15:20.762 "read": true, 00:15:20.762 "write": true, 00:15:20.762 "unmap": true, 00:15:20.763 "flush": true, 00:15:20.763 
"reset": true, 00:15:20.763 "nvme_admin": false, 00:15:20.763 "nvme_io": false, 00:15:20.763 "nvme_io_md": false, 00:15:20.763 "write_zeroes": true, 00:15:20.763 "zcopy": true, 00:15:20.763 "get_zone_info": false, 00:15:20.763 "zone_management": false, 00:15:20.763 "zone_append": false, 00:15:20.763 "compare": false, 00:15:20.763 "compare_and_write": false, 00:15:20.763 "abort": true, 00:15:20.763 "seek_hole": false, 00:15:20.763 "seek_data": false, 00:15:20.763 "copy": true, 00:15:20.763 "nvme_iov_md": false 00:15:20.763 }, 00:15:20.763 "memory_domains": [ 00:15:20.763 { 00:15:20.763 "dma_device_id": "system", 00:15:20.763 "dma_device_type": 1 00:15:20.763 }, 00:15:20.763 { 00:15:20.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.763 "dma_device_type": 2 00:15:20.763 } 00:15:20.763 ], 00:15:20.763 "driver_specific": {} 00:15:20.763 } 00:15:20.763 ] 00:15:20.763 21:58:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:20.763 21:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:20.763 21:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:20.763 21:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:21.022 BaseBdev3 00:15:21.022 21:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:21.022 21:58:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:21.022 21:58:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:21.022 21:58:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:21.022 21:58:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:21.022 21:58:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:21.022 21:58:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:21.282 21:58:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:21.282 [ 00:15:21.282 { 00:15:21.282 "name": "BaseBdev3", 00:15:21.282 "aliases": [ 00:15:21.282 "279b2d88-cc9b-46c8-9700-911c41cd67ab" 00:15:21.282 ], 00:15:21.282 "product_name": "Malloc disk", 00:15:21.282 "block_size": 512, 00:15:21.282 "num_blocks": 65536, 00:15:21.282 "uuid": "279b2d88-cc9b-46c8-9700-911c41cd67ab", 00:15:21.282 "assigned_rate_limits": { 00:15:21.282 "rw_ios_per_sec": 0, 00:15:21.282 "rw_mbytes_per_sec": 0, 00:15:21.282 "r_mbytes_per_sec": 0, 00:15:21.282 "w_mbytes_per_sec": 0 00:15:21.282 }, 00:15:21.282 "claimed": false, 00:15:21.282 "zoned": false, 00:15:21.282 "supported_io_types": { 00:15:21.282 "read": true, 00:15:21.282 "write": true, 00:15:21.282 "unmap": true, 00:15:21.282 "flush": true, 00:15:21.282 "reset": true, 00:15:21.282 "nvme_admin": false, 00:15:21.282 "nvme_io": false, 00:15:21.282 "nvme_io_md": false, 00:15:21.282 "write_zeroes": true, 00:15:21.282 "zcopy": true, 00:15:21.282 "get_zone_info": false, 00:15:21.282 "zone_management": false, 00:15:21.282 "zone_append": false, 00:15:21.282 "compare": false, 00:15:21.282 "compare_and_write": false, 00:15:21.282 "abort": true, 00:15:21.282 "seek_hole": false, 00:15:21.282 "seek_data": false, 00:15:21.282 "copy": true, 00:15:21.282 "nvme_iov_md": false 00:15:21.282 }, 00:15:21.282 "memory_domains": [ 00:15:21.282 { 00:15:21.282 "dma_device_id": "system", 00:15:21.282 "dma_device_type": 1 00:15:21.282 }, 00:15:21.282 { 00:15:21.282 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:21.282 "dma_device_type": 2 00:15:21.282 } 00:15:21.282 ], 00:15:21.282 "driver_specific": {} 00:15:21.282 } 00:15:21.282 ] 00:15:21.282 21:58:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:21.282 21:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:21.282 21:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:21.282 21:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:21.541 [2024-07-13 21:58:40.795066] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:21.541 [2024-07-13 21:58:40.795104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:21.541 [2024-07-13 21:58:40.795129] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:21.541 [2024-07-13 21:58:40.796855] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:21.541 21:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:21.541 21:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:21.541 21:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:21.541 21:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:21.541 21:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:21.541 21:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:21.541 21:58:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:21.541 21:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:21.541 21:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:21.541 21:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:21.541 21:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.541 21:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.801 21:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:21.801 "name": "Existed_Raid", 00:15:21.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.801 "strip_size_kb": 64, 00:15:21.801 "state": "configuring", 00:15:21.801 "raid_level": "concat", 00:15:21.801 "superblock": false, 00:15:21.801 "num_base_bdevs": 3, 00:15:21.801 "num_base_bdevs_discovered": 2, 00:15:21.801 "num_base_bdevs_operational": 3, 00:15:21.801 "base_bdevs_list": [ 00:15:21.801 { 00:15:21.801 "name": "BaseBdev1", 00:15:21.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.801 "is_configured": false, 00:15:21.801 "data_offset": 0, 00:15:21.801 "data_size": 0 00:15:21.801 }, 00:15:21.801 { 00:15:21.801 "name": "BaseBdev2", 00:15:21.801 "uuid": "b3a45de6-a46b-455c-b342-141b5ce76853", 00:15:21.801 "is_configured": true, 00:15:21.801 "data_offset": 0, 00:15:21.801 "data_size": 65536 00:15:21.801 }, 00:15:21.801 { 00:15:21.801 "name": "BaseBdev3", 00:15:21.801 "uuid": "279b2d88-cc9b-46c8-9700-911c41cd67ab", 00:15:21.801 "is_configured": true, 00:15:21.801 "data_offset": 0, 00:15:21.801 "data_size": 65536 00:15:21.801 } 00:15:21.801 ] 00:15:21.801 }' 00:15:21.801 21:58:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:21.801 21:58:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.369 21:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:22.369 [2024-07-13 21:58:41.637266] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:22.369 21:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:22.369 21:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:22.369 21:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:22.369 21:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:22.369 21:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:22.369 21:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:22.369 21:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:22.369 21:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:22.369 21:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:22.369 21:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:22.369 21:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.369 21:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:15:22.628 21:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:22.628 "name": "Existed_Raid", 00:15:22.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.628 "strip_size_kb": 64, 00:15:22.628 "state": "configuring", 00:15:22.628 "raid_level": "concat", 00:15:22.628 "superblock": false, 00:15:22.628 "num_base_bdevs": 3, 00:15:22.628 "num_base_bdevs_discovered": 1, 00:15:22.628 "num_base_bdevs_operational": 3, 00:15:22.628 "base_bdevs_list": [ 00:15:22.628 { 00:15:22.628 "name": "BaseBdev1", 00:15:22.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.628 "is_configured": false, 00:15:22.628 "data_offset": 0, 00:15:22.628 "data_size": 0 00:15:22.628 }, 00:15:22.628 { 00:15:22.628 "name": null, 00:15:22.628 "uuid": "b3a45de6-a46b-455c-b342-141b5ce76853", 00:15:22.628 "is_configured": false, 00:15:22.628 "data_offset": 0, 00:15:22.628 "data_size": 65536 00:15:22.628 }, 00:15:22.628 { 00:15:22.628 "name": "BaseBdev3", 00:15:22.628 "uuid": "279b2d88-cc9b-46c8-9700-911c41cd67ab", 00:15:22.628 "is_configured": true, 00:15:22.628 "data_offset": 0, 00:15:22.628 "data_size": 65536 00:15:22.628 } 00:15:22.628 ] 00:15:22.628 }' 00:15:22.628 21:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:22.628 21:58:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.196 21:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.196 21:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:23.196 21:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:15:23.196 21:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:23.455 [2024-07-13 21:58:42.654946] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:23.455 BaseBdev1 00:15:23.455 21:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:23.455 21:58:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:23.455 21:58:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:23.455 21:58:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:23.455 21:58:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:23.455 21:58:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:23.455 21:58:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:23.714 21:58:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:23.714 [ 00:15:23.714 { 00:15:23.714 "name": "BaseBdev1", 00:15:23.714 "aliases": [ 00:15:23.714 "98a5ab86-d6fd-4f4f-8dcb-179fbc86ff5c" 00:15:23.714 ], 00:15:23.714 "product_name": "Malloc disk", 00:15:23.714 "block_size": 512, 00:15:23.714 "num_blocks": 65536, 00:15:23.714 "uuid": "98a5ab86-d6fd-4f4f-8dcb-179fbc86ff5c", 00:15:23.714 "assigned_rate_limits": { 00:15:23.714 "rw_ios_per_sec": 0, 00:15:23.714 "rw_mbytes_per_sec": 0, 00:15:23.714 "r_mbytes_per_sec": 0, 00:15:23.714 "w_mbytes_per_sec": 0 00:15:23.714 }, 00:15:23.714 "claimed": true, 00:15:23.714 "claim_type": "exclusive_write", 00:15:23.714 "zoned": false, 00:15:23.714 "supported_io_types": { 00:15:23.714 "read": 
true, 00:15:23.714 "write": true, 00:15:23.714 "unmap": true, 00:15:23.714 "flush": true, 00:15:23.714 "reset": true, 00:15:23.714 "nvme_admin": false, 00:15:23.714 "nvme_io": false, 00:15:23.714 "nvme_io_md": false, 00:15:23.714 "write_zeroes": true, 00:15:23.714 "zcopy": true, 00:15:23.714 "get_zone_info": false, 00:15:23.714 "zone_management": false, 00:15:23.714 "zone_append": false, 00:15:23.714 "compare": false, 00:15:23.714 "compare_and_write": false, 00:15:23.714 "abort": true, 00:15:23.714 "seek_hole": false, 00:15:23.714 "seek_data": false, 00:15:23.714 "copy": true, 00:15:23.714 "nvme_iov_md": false 00:15:23.714 }, 00:15:23.714 "memory_domains": [ 00:15:23.714 { 00:15:23.714 "dma_device_id": "system", 00:15:23.714 "dma_device_type": 1 00:15:23.714 }, 00:15:23.714 { 00:15:23.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.714 "dma_device_type": 2 00:15:23.714 } 00:15:23.714 ], 00:15:23.714 "driver_specific": {} 00:15:23.714 } 00:15:23.714 ] 00:15:23.715 21:58:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:23.715 21:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:23.715 21:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:23.715 21:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:23.715 21:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:23.715 21:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:23.715 21:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:23.715 21:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:23.715 21:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:15:23.715 21:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:23.715 21:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:23.715 21:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.715 21:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.989 21:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:23.989 "name": "Existed_Raid", 00:15:23.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.989 "strip_size_kb": 64, 00:15:23.989 "state": "configuring", 00:15:23.989 "raid_level": "concat", 00:15:23.989 "superblock": false, 00:15:23.989 "num_base_bdevs": 3, 00:15:23.989 "num_base_bdevs_discovered": 2, 00:15:23.989 "num_base_bdevs_operational": 3, 00:15:23.989 "base_bdevs_list": [ 00:15:23.989 { 00:15:23.989 "name": "BaseBdev1", 00:15:23.989 "uuid": "98a5ab86-d6fd-4f4f-8dcb-179fbc86ff5c", 00:15:23.989 "is_configured": true, 00:15:23.989 "data_offset": 0, 00:15:23.989 "data_size": 65536 00:15:23.989 }, 00:15:23.989 { 00:15:23.989 "name": null, 00:15:23.989 "uuid": "b3a45de6-a46b-455c-b342-141b5ce76853", 00:15:23.989 "is_configured": false, 00:15:23.989 "data_offset": 0, 00:15:23.989 "data_size": 65536 00:15:23.989 }, 00:15:23.989 { 00:15:23.989 "name": "BaseBdev3", 00:15:23.989 "uuid": "279b2d88-cc9b-46c8-9700-911c41cd67ab", 00:15:23.989 "is_configured": true, 00:15:23.989 "data_offset": 0, 00:15:23.989 "data_size": 65536 00:15:23.989 } 00:15:23.989 ] 00:15:23.989 }' 00:15:23.989 21:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:23.989 21:58:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.559 21:58:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:24.559 21:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.559 21:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:24.559 21:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:24.818 [2024-07-13 21:58:43.990522] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:24.818 21:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:24.818 21:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:24.818 21:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:24.818 21:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:24.818 21:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:24.818 21:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:24.818 21:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:24.818 21:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:24.818 21:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:24.818 21:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:24.818 21:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.818 21:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.818 21:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:24.818 "name": "Existed_Raid", 00:15:24.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.818 "strip_size_kb": 64, 00:15:24.818 "state": "configuring", 00:15:24.818 "raid_level": "concat", 00:15:24.818 "superblock": false, 00:15:24.818 "num_base_bdevs": 3, 00:15:24.818 "num_base_bdevs_discovered": 1, 00:15:24.818 "num_base_bdevs_operational": 3, 00:15:24.818 "base_bdevs_list": [ 00:15:24.818 { 00:15:24.818 "name": "BaseBdev1", 00:15:24.818 "uuid": "98a5ab86-d6fd-4f4f-8dcb-179fbc86ff5c", 00:15:24.818 "is_configured": true, 00:15:24.818 "data_offset": 0, 00:15:24.818 "data_size": 65536 00:15:24.818 }, 00:15:24.818 { 00:15:24.818 "name": null, 00:15:24.818 "uuid": "b3a45de6-a46b-455c-b342-141b5ce76853", 00:15:24.818 "is_configured": false, 00:15:24.818 "data_offset": 0, 00:15:24.818 "data_size": 65536 00:15:24.818 }, 00:15:24.818 { 00:15:24.818 "name": null, 00:15:24.818 "uuid": "279b2d88-cc9b-46c8-9700-911c41cd67ab", 00:15:24.818 "is_configured": false, 00:15:24.818 "data_offset": 0, 00:15:24.818 "data_size": 65536 00:15:24.818 } 00:15:24.818 ] 00:15:24.818 }' 00:15:24.818 21:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:24.818 21:58:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.387 21:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:25.387 21:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.646 21:58:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:25.646 21:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:25.646 [2024-07-13 21:58:45.017230] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:25.646 21:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:25.646 21:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:25.646 21:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:25.646 21:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:25.646 21:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:25.646 21:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:25.646 21:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:25.646 21:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:25.646 21:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:25.646 21:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:25.905 21:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.905 21:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.905 21:58:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:25.905 "name": "Existed_Raid", 00:15:25.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.905 "strip_size_kb": 64, 00:15:25.905 "state": "configuring", 00:15:25.905 "raid_level": "concat", 00:15:25.905 "superblock": false, 00:15:25.905 "num_base_bdevs": 3, 00:15:25.905 "num_base_bdevs_discovered": 2, 00:15:25.905 "num_base_bdevs_operational": 3, 00:15:25.905 "base_bdevs_list": [ 00:15:25.905 { 00:15:25.905 "name": "BaseBdev1", 00:15:25.905 "uuid": "98a5ab86-d6fd-4f4f-8dcb-179fbc86ff5c", 00:15:25.905 "is_configured": true, 00:15:25.905 "data_offset": 0, 00:15:25.905 "data_size": 65536 00:15:25.905 }, 00:15:25.905 { 00:15:25.905 "name": null, 00:15:25.905 "uuid": "b3a45de6-a46b-455c-b342-141b5ce76853", 00:15:25.905 "is_configured": false, 00:15:25.905 "data_offset": 0, 00:15:25.905 "data_size": 65536 00:15:25.905 }, 00:15:25.905 { 00:15:25.905 "name": "BaseBdev3", 00:15:25.905 "uuid": "279b2d88-cc9b-46c8-9700-911c41cd67ab", 00:15:25.905 "is_configured": true, 00:15:25.905 "data_offset": 0, 00:15:25.905 "data_size": 65536 00:15:25.905 } 00:15:25.905 ] 00:15:25.905 }' 00:15:25.906 21:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:25.906 21:58:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.473 21:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:26.473 21:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.473 21:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:26.473 21:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:15:26.733 [2024-07-13 21:58:46.011842] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:26.733 21:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:26.733 21:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:26.733 21:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:26.733 21:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:26.733 21:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:26.733 21:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:26.733 21:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:26.733 21:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:26.992 21:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:26.992 21:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:26.992 21:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.993 21:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.993 21:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:26.993 "name": "Existed_Raid", 00:15:26.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.993 "strip_size_kb": 64, 00:15:26.993 "state": "configuring", 00:15:26.993 "raid_level": "concat", 00:15:26.993 "superblock": false, 00:15:26.993 "num_base_bdevs": 3, 00:15:26.993 
"num_base_bdevs_discovered": 1, 00:15:26.993 "num_base_bdevs_operational": 3, 00:15:26.993 "base_bdevs_list": [ 00:15:26.993 { 00:15:26.993 "name": null, 00:15:26.993 "uuid": "98a5ab86-d6fd-4f4f-8dcb-179fbc86ff5c", 00:15:26.993 "is_configured": false, 00:15:26.993 "data_offset": 0, 00:15:26.993 "data_size": 65536 00:15:26.993 }, 00:15:26.993 { 00:15:26.993 "name": null, 00:15:26.993 "uuid": "b3a45de6-a46b-455c-b342-141b5ce76853", 00:15:26.993 "is_configured": false, 00:15:26.993 "data_offset": 0, 00:15:26.993 "data_size": 65536 00:15:26.993 }, 00:15:26.993 { 00:15:26.993 "name": "BaseBdev3", 00:15:26.993 "uuid": "279b2d88-cc9b-46c8-9700-911c41cd67ab", 00:15:26.993 "is_configured": true, 00:15:26.993 "data_offset": 0, 00:15:26.993 "data_size": 65536 00:15:26.993 } 00:15:26.993 ] 00:15:26.993 }' 00:15:26.993 21:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:26.993 21:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.562 21:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.562 21:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:27.821 21:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:27.821 21:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:27.821 [2024-07-13 21:58:47.107547] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:27.821 21:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:27.821 21:58:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:27.821 21:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:27.821 21:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:27.821 21:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:27.821 21:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:27.821 21:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:27.821 21:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:27.821 21:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:27.821 21:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:27.821 21:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.821 21:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.119 21:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:28.119 "name": "Existed_Raid", 00:15:28.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.119 "strip_size_kb": 64, 00:15:28.119 "state": "configuring", 00:15:28.119 "raid_level": "concat", 00:15:28.119 "superblock": false, 00:15:28.119 "num_base_bdevs": 3, 00:15:28.119 "num_base_bdevs_discovered": 2, 00:15:28.119 "num_base_bdevs_operational": 3, 00:15:28.119 "base_bdevs_list": [ 00:15:28.119 { 00:15:28.119 "name": null, 00:15:28.119 "uuid": "98a5ab86-d6fd-4f4f-8dcb-179fbc86ff5c", 00:15:28.119 "is_configured": false, 00:15:28.119 "data_offset": 0, 
00:15:28.119 "data_size": 65536 00:15:28.119 }, 00:15:28.119 { 00:15:28.119 "name": "BaseBdev2", 00:15:28.119 "uuid": "b3a45de6-a46b-455c-b342-141b5ce76853", 00:15:28.119 "is_configured": true, 00:15:28.119 "data_offset": 0, 00:15:28.119 "data_size": 65536 00:15:28.119 }, 00:15:28.119 { 00:15:28.119 "name": "BaseBdev3", 00:15:28.119 "uuid": "279b2d88-cc9b-46c8-9700-911c41cd67ab", 00:15:28.119 "is_configured": true, 00:15:28.119 "data_offset": 0, 00:15:28.119 "data_size": 65536 00:15:28.119 } 00:15:28.119 ] 00:15:28.119 }' 00:15:28.119 21:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:28.119 21:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.379 21:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.379 21:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:28.637 21:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:28.637 21:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.637 21:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:28.897 21:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 98a5ab86-d6fd-4f4f-8dcb-179fbc86ff5c 00:15:28.897 [2024-07-13 21:58:48.285368] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:28.897 [2024-07-13 21:58:48.285405] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 
0x616000041780 00:15:28.897 [2024-07-13 21:58:48.285416] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:28.897 [2024-07-13 21:58:48.285671] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:15:28.897 [2024-07-13 21:58:48.285828] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041780 00:15:28.897 [2024-07-13 21:58:48.285838] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000041780 00:15:28.897 [2024-07-13 21:58:48.286086] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.156 NewBaseBdev 00:15:29.156 21:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:29.156 21:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:15:29.156 21:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:29.156 21:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:29.156 21:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:29.156 21:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:29.156 21:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:29.156 21:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:29.416 [ 00:15:29.416 { 00:15:29.416 "name": "NewBaseBdev", 00:15:29.416 "aliases": [ 00:15:29.416 "98a5ab86-d6fd-4f4f-8dcb-179fbc86ff5c" 00:15:29.416 ], 00:15:29.416 "product_name": "Malloc disk", 
00:15:29.416 "block_size": 512, 00:15:29.416 "num_blocks": 65536, 00:15:29.416 "uuid": "98a5ab86-d6fd-4f4f-8dcb-179fbc86ff5c", 00:15:29.416 "assigned_rate_limits": { 00:15:29.416 "rw_ios_per_sec": 0, 00:15:29.416 "rw_mbytes_per_sec": 0, 00:15:29.416 "r_mbytes_per_sec": 0, 00:15:29.416 "w_mbytes_per_sec": 0 00:15:29.416 }, 00:15:29.416 "claimed": true, 00:15:29.416 "claim_type": "exclusive_write", 00:15:29.416 "zoned": false, 00:15:29.416 "supported_io_types": { 00:15:29.416 "read": true, 00:15:29.416 "write": true, 00:15:29.416 "unmap": true, 00:15:29.416 "flush": true, 00:15:29.416 "reset": true, 00:15:29.416 "nvme_admin": false, 00:15:29.416 "nvme_io": false, 00:15:29.416 "nvme_io_md": false, 00:15:29.416 "write_zeroes": true, 00:15:29.416 "zcopy": true, 00:15:29.416 "get_zone_info": false, 00:15:29.416 "zone_management": false, 00:15:29.416 "zone_append": false, 00:15:29.416 "compare": false, 00:15:29.416 "compare_and_write": false, 00:15:29.416 "abort": true, 00:15:29.416 "seek_hole": false, 00:15:29.416 "seek_data": false, 00:15:29.416 "copy": true, 00:15:29.416 "nvme_iov_md": false 00:15:29.416 }, 00:15:29.416 "memory_domains": [ 00:15:29.416 { 00:15:29.416 "dma_device_id": "system", 00:15:29.416 "dma_device_type": 1 00:15:29.416 }, 00:15:29.416 { 00:15:29.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.416 "dma_device_type": 2 00:15:29.416 } 00:15:29.416 ], 00:15:29.416 "driver_specific": {} 00:15:29.416 } 00:15:29.416 ] 00:15:29.416 21:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:29.416 21:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:29.416 21:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:29.416 21:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:29.416 21:58:48 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:29.416 21:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:29.416 21:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:29.416 21:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:29.416 21:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:29.416 21:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:29.416 21:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:29.416 21:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.416 21:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.416 21:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:29.416 "name": "Existed_Raid", 00:15:29.416 "uuid": "ea199171-5292-4618-9e1a-d4d7fafad85e", 00:15:29.416 "strip_size_kb": 64, 00:15:29.416 "state": "online", 00:15:29.416 "raid_level": "concat", 00:15:29.416 "superblock": false, 00:15:29.416 "num_base_bdevs": 3, 00:15:29.416 "num_base_bdevs_discovered": 3, 00:15:29.417 "num_base_bdevs_operational": 3, 00:15:29.417 "base_bdevs_list": [ 00:15:29.417 { 00:15:29.417 "name": "NewBaseBdev", 00:15:29.417 "uuid": "98a5ab86-d6fd-4f4f-8dcb-179fbc86ff5c", 00:15:29.417 "is_configured": true, 00:15:29.417 "data_offset": 0, 00:15:29.417 "data_size": 65536 00:15:29.417 }, 00:15:29.417 { 00:15:29.417 "name": "BaseBdev2", 00:15:29.417 "uuid": "b3a45de6-a46b-455c-b342-141b5ce76853", 00:15:29.417 "is_configured": true, 00:15:29.417 "data_offset": 0, 00:15:29.417 "data_size": 65536 00:15:29.417 }, 
00:15:29.417 { 00:15:29.417 "name": "BaseBdev3", 00:15:29.417 "uuid": "279b2d88-cc9b-46c8-9700-911c41cd67ab", 00:15:29.417 "is_configured": true, 00:15:29.417 "data_offset": 0, 00:15:29.417 "data_size": 65536 00:15:29.417 } 00:15:29.417 ] 00:15:29.417 }' 00:15:29.417 21:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:29.417 21:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.986 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:29.986 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:29.986 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:29.986 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:29.986 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:29.986 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:29.986 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:29.986 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:30.247 [2024-07-13 21:58:49.420652] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:30.247 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:30.247 "name": "Existed_Raid", 00:15:30.247 "aliases": [ 00:15:30.247 "ea199171-5292-4618-9e1a-d4d7fafad85e" 00:15:30.247 ], 00:15:30.247 "product_name": "Raid Volume", 00:15:30.247 "block_size": 512, 00:15:30.247 "num_blocks": 196608, 00:15:30.247 "uuid": "ea199171-5292-4618-9e1a-d4d7fafad85e", 00:15:30.247 "assigned_rate_limits": 
{ 00:15:30.247 "rw_ios_per_sec": 0, 00:15:30.247 "rw_mbytes_per_sec": 0, 00:15:30.247 "r_mbytes_per_sec": 0, 00:15:30.247 "w_mbytes_per_sec": 0 00:15:30.247 }, 00:15:30.247 "claimed": false, 00:15:30.247 "zoned": false, 00:15:30.247 "supported_io_types": { 00:15:30.247 "read": true, 00:15:30.247 "write": true, 00:15:30.247 "unmap": true, 00:15:30.247 "flush": true, 00:15:30.247 "reset": true, 00:15:30.247 "nvme_admin": false, 00:15:30.247 "nvme_io": false, 00:15:30.247 "nvme_io_md": false, 00:15:30.247 "write_zeroes": true, 00:15:30.247 "zcopy": false, 00:15:30.247 "get_zone_info": false, 00:15:30.247 "zone_management": false, 00:15:30.247 "zone_append": false, 00:15:30.247 "compare": false, 00:15:30.247 "compare_and_write": false, 00:15:30.247 "abort": false, 00:15:30.247 "seek_hole": false, 00:15:30.247 "seek_data": false, 00:15:30.247 "copy": false, 00:15:30.247 "nvme_iov_md": false 00:15:30.247 }, 00:15:30.247 "memory_domains": [ 00:15:30.247 { 00:15:30.247 "dma_device_id": "system", 00:15:30.247 "dma_device_type": 1 00:15:30.247 }, 00:15:30.247 { 00:15:30.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.247 "dma_device_type": 2 00:15:30.247 }, 00:15:30.247 { 00:15:30.247 "dma_device_id": "system", 00:15:30.247 "dma_device_type": 1 00:15:30.247 }, 00:15:30.247 { 00:15:30.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.247 "dma_device_type": 2 00:15:30.247 }, 00:15:30.247 { 00:15:30.247 "dma_device_id": "system", 00:15:30.247 "dma_device_type": 1 00:15:30.247 }, 00:15:30.247 { 00:15:30.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.247 "dma_device_type": 2 00:15:30.247 } 00:15:30.247 ], 00:15:30.247 "driver_specific": { 00:15:30.247 "raid": { 00:15:30.247 "uuid": "ea199171-5292-4618-9e1a-d4d7fafad85e", 00:15:30.247 "strip_size_kb": 64, 00:15:30.247 "state": "online", 00:15:30.247 "raid_level": "concat", 00:15:30.247 "superblock": false, 00:15:30.247 "num_base_bdevs": 3, 00:15:30.247 "num_base_bdevs_discovered": 3, 00:15:30.247 
"num_base_bdevs_operational": 3, 00:15:30.247 "base_bdevs_list": [ 00:15:30.247 { 00:15:30.247 "name": "NewBaseBdev", 00:15:30.247 "uuid": "98a5ab86-d6fd-4f4f-8dcb-179fbc86ff5c", 00:15:30.247 "is_configured": true, 00:15:30.247 "data_offset": 0, 00:15:30.247 "data_size": 65536 00:15:30.247 }, 00:15:30.247 { 00:15:30.247 "name": "BaseBdev2", 00:15:30.247 "uuid": "b3a45de6-a46b-455c-b342-141b5ce76853", 00:15:30.247 "is_configured": true, 00:15:30.247 "data_offset": 0, 00:15:30.247 "data_size": 65536 00:15:30.247 }, 00:15:30.247 { 00:15:30.247 "name": "BaseBdev3", 00:15:30.247 "uuid": "279b2d88-cc9b-46c8-9700-911c41cd67ab", 00:15:30.247 "is_configured": true, 00:15:30.247 "data_offset": 0, 00:15:30.247 "data_size": 65536 00:15:30.247 } 00:15:30.247 ] 00:15:30.247 } 00:15:30.247 } 00:15:30.247 }' 00:15:30.247 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:30.247 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:30.247 BaseBdev2 00:15:30.247 BaseBdev3' 00:15:30.247 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:30.247 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:30.247 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:30.507 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:30.507 "name": "NewBaseBdev", 00:15:30.507 "aliases": [ 00:15:30.507 "98a5ab86-d6fd-4f4f-8dcb-179fbc86ff5c" 00:15:30.507 ], 00:15:30.507 "product_name": "Malloc disk", 00:15:30.507 "block_size": 512, 00:15:30.507 "num_blocks": 65536, 00:15:30.507 "uuid": "98a5ab86-d6fd-4f4f-8dcb-179fbc86ff5c", 00:15:30.507 
"assigned_rate_limits": { 00:15:30.507 "rw_ios_per_sec": 0, 00:15:30.507 "rw_mbytes_per_sec": 0, 00:15:30.507 "r_mbytes_per_sec": 0, 00:15:30.507 "w_mbytes_per_sec": 0 00:15:30.507 }, 00:15:30.507 "claimed": true, 00:15:30.507 "claim_type": "exclusive_write", 00:15:30.507 "zoned": false, 00:15:30.507 "supported_io_types": { 00:15:30.507 "read": true, 00:15:30.507 "write": true, 00:15:30.507 "unmap": true, 00:15:30.507 "flush": true, 00:15:30.507 "reset": true, 00:15:30.507 "nvme_admin": false, 00:15:30.507 "nvme_io": false, 00:15:30.507 "nvme_io_md": false, 00:15:30.507 "write_zeroes": true, 00:15:30.507 "zcopy": true, 00:15:30.507 "get_zone_info": false, 00:15:30.507 "zone_management": false, 00:15:30.507 "zone_append": false, 00:15:30.507 "compare": false, 00:15:30.507 "compare_and_write": false, 00:15:30.507 "abort": true, 00:15:30.507 "seek_hole": false, 00:15:30.507 "seek_data": false, 00:15:30.507 "copy": true, 00:15:30.507 "nvme_iov_md": false 00:15:30.507 }, 00:15:30.507 "memory_domains": [ 00:15:30.507 { 00:15:30.507 "dma_device_id": "system", 00:15:30.507 "dma_device_type": 1 00:15:30.507 }, 00:15:30.507 { 00:15:30.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.507 "dma_device_type": 2 00:15:30.507 } 00:15:30.507 ], 00:15:30.507 "driver_specific": {} 00:15:30.507 }' 00:15:30.507 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:30.507 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:30.507 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:30.507 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:30.507 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:30.507 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:30.507 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 
-- # jq .md_interleave 00:15:30.507 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:30.507 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:30.507 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:30.766 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:30.766 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:30.766 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:30.766 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:30.766 21:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:30.766 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:30.766 "name": "BaseBdev2", 00:15:30.766 "aliases": [ 00:15:30.766 "b3a45de6-a46b-455c-b342-141b5ce76853" 00:15:30.766 ], 00:15:30.766 "product_name": "Malloc disk", 00:15:30.766 "block_size": 512, 00:15:30.766 "num_blocks": 65536, 00:15:30.766 "uuid": "b3a45de6-a46b-455c-b342-141b5ce76853", 00:15:30.766 "assigned_rate_limits": { 00:15:30.766 "rw_ios_per_sec": 0, 00:15:30.766 "rw_mbytes_per_sec": 0, 00:15:30.766 "r_mbytes_per_sec": 0, 00:15:30.766 "w_mbytes_per_sec": 0 00:15:30.766 }, 00:15:30.766 "claimed": true, 00:15:30.766 "claim_type": "exclusive_write", 00:15:30.766 "zoned": false, 00:15:30.766 "supported_io_types": { 00:15:30.766 "read": true, 00:15:30.766 "write": true, 00:15:30.766 "unmap": true, 00:15:30.766 "flush": true, 00:15:30.766 "reset": true, 00:15:30.766 "nvme_admin": false, 00:15:30.766 "nvme_io": false, 00:15:30.766 "nvme_io_md": false, 00:15:30.766 "write_zeroes": true, 00:15:30.766 "zcopy": 
true, 00:15:30.766 "get_zone_info": false, 00:15:30.766 "zone_management": false, 00:15:30.766 "zone_append": false, 00:15:30.766 "compare": false, 00:15:30.766 "compare_and_write": false, 00:15:30.766 "abort": true, 00:15:30.766 "seek_hole": false, 00:15:30.766 "seek_data": false, 00:15:30.766 "copy": true, 00:15:30.766 "nvme_iov_md": false 00:15:30.766 }, 00:15:30.766 "memory_domains": [ 00:15:30.766 { 00:15:30.766 "dma_device_id": "system", 00:15:30.766 "dma_device_type": 1 00:15:30.766 }, 00:15:30.766 { 00:15:30.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.766 "dma_device_type": 2 00:15:30.766 } 00:15:30.766 ], 00:15:30.766 "driver_specific": {} 00:15:30.766 }' 00:15:30.766 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:31.025 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:31.025 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:31.025 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:31.025 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:31.025 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:31.025 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:31.025 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:31.025 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:31.025 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:31.283 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:31.283 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:31.283 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 
-- # for name in $base_bdev_names 00:15:31.283 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:31.283 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:31.283 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:31.283 "name": "BaseBdev3", 00:15:31.283 "aliases": [ 00:15:31.283 "279b2d88-cc9b-46c8-9700-911c41cd67ab" 00:15:31.283 ], 00:15:31.283 "product_name": "Malloc disk", 00:15:31.283 "block_size": 512, 00:15:31.283 "num_blocks": 65536, 00:15:31.283 "uuid": "279b2d88-cc9b-46c8-9700-911c41cd67ab", 00:15:31.283 "assigned_rate_limits": { 00:15:31.283 "rw_ios_per_sec": 0, 00:15:31.283 "rw_mbytes_per_sec": 0, 00:15:31.283 "r_mbytes_per_sec": 0, 00:15:31.283 "w_mbytes_per_sec": 0 00:15:31.283 }, 00:15:31.283 "claimed": true, 00:15:31.283 "claim_type": "exclusive_write", 00:15:31.283 "zoned": false, 00:15:31.283 "supported_io_types": { 00:15:31.283 "read": true, 00:15:31.283 "write": true, 00:15:31.283 "unmap": true, 00:15:31.283 "flush": true, 00:15:31.283 "reset": true, 00:15:31.283 "nvme_admin": false, 00:15:31.283 "nvme_io": false, 00:15:31.283 "nvme_io_md": false, 00:15:31.283 "write_zeroes": true, 00:15:31.283 "zcopy": true, 00:15:31.283 "get_zone_info": false, 00:15:31.283 "zone_management": false, 00:15:31.283 "zone_append": false, 00:15:31.283 "compare": false, 00:15:31.283 "compare_and_write": false, 00:15:31.283 "abort": true, 00:15:31.283 "seek_hole": false, 00:15:31.283 "seek_data": false, 00:15:31.283 "copy": true, 00:15:31.283 "nvme_iov_md": false 00:15:31.283 }, 00:15:31.283 "memory_domains": [ 00:15:31.283 { 00:15:31.283 "dma_device_id": "system", 00:15:31.283 "dma_device_type": 1 00:15:31.283 }, 00:15:31.283 { 00:15:31.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.283 "dma_device_type": 2 00:15:31.283 } 
00:15:31.283 ], 00:15:31.283 "driver_specific": {} 00:15:31.284 }' 00:15:31.284 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:31.284 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:31.541 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:31.541 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:31.541 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:31.541 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:31.541 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:31.541 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:31.541 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:31.541 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:31.541 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:31.541 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:31.541 21:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:31.812 [2024-07-13 21:58:51.068738] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:31.813 [2024-07-13 21:58:51.068764] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:31.813 [2024-07-13 21:58:51.068832] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.813 [2024-07-13 21:58:51.068884] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 
0, going to free all in destruct 00:15:31.813 [2024-07-13 21:58:51.068900] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041780 name Existed_Raid, state offline 00:15:31.813 21:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1382512 00:15:31.813 21:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 1382512 ']' 00:15:31.813 21:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 1382512 00:15:31.813 21:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:15:31.813 21:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:31.813 21:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1382512 00:15:31.813 21:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:31.813 21:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:31.813 21:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1382512' 00:15:31.813 killing process with pid 1382512 00:15:31.813 21:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 1382512 00:15:31.813 [2024-07-13 21:58:51.142853] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:31.813 21:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 1382512 00:15:32.080 [2024-07-13 21:58:51.374701] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:33.457 00:15:33.457 real 0m23.211s 00:15:33.457 user 0m40.637s 00:15:33.457 sys 0m4.335s 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.457 ************************************ 00:15:33.457 END TEST raid_state_function_test 00:15:33.457 ************************************ 00:15:33.457 21:58:52 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:33.457 21:58:52 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:15:33.457 21:58:52 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:33.457 21:58:52 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:33.457 21:58:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:33.457 ************************************ 00:15:33.457 START TEST raid_state_function_test_sb 00:15:33.457 ************************************ 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 true 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= 
num_base_bdevs )) 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@244 -- # raid_pid=1386990 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1386990' 00:15:33.457 Process raid pid: 1386990 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1386990 /var/tmp/spdk-raid.sock 00:15:33.457 21:58:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1386990 ']' 00:15:33.458 21:58:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:33.458 21:58:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:33.458 21:58:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:33.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:33.458 21:58:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:33.458 21:58:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.458 [2024-07-13 21:58:52.766224] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:15:33.458 [2024-07-13 21:58:52.766335] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:33.719 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:33.719 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:33.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:33.719 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:33.719 [2024-07-13 21:58:52.930686] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.979 [2024-07-13 21:58:53.136995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.237 [2024-07-13 21:58:53.382064] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:34.237 [2024-07-13 21:58:53.382092] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:34.237 21:58:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:34.237 21:58:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:15:34.237 21:58:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:34.496 [2024-07-13 21:58:53.686060] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:34.496 [2024-07-13 21:58:53.686105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:15:34.496 [2024-07-13 21:58:53.686116] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:34.496 [2024-07-13 21:58:53.686143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:34.496 [2024-07-13 21:58:53.686152] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:34.496 [2024-07-13 21:58:53.686163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:34.496 21:58:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:34.496 21:58:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:34.496 21:58:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:34.496 21:58:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:34.496 21:58:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:34.497 21:58:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:34.497 21:58:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:34.497 21:58:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:34.497 21:58:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:34.497 21:58:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:34.497 21:58:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.497 21:58:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.497 21:58:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:34.497 "name": "Existed_Raid", 00:15:34.497 "uuid": "2f0e9833-4cee-40ea-a7fe-06c2dc8d5c21", 00:15:34.497 "strip_size_kb": 64, 00:15:34.497 "state": "configuring", 00:15:34.497 "raid_level": "concat", 00:15:34.497 "superblock": true, 00:15:34.497 "num_base_bdevs": 3, 00:15:34.497 "num_base_bdevs_discovered": 0, 00:15:34.497 "num_base_bdevs_operational": 3, 00:15:34.497 "base_bdevs_list": [ 00:15:34.497 { 00:15:34.497 "name": "BaseBdev1", 00:15:34.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.497 "is_configured": false, 00:15:34.497 "data_offset": 0, 00:15:34.497 "data_size": 0 00:15:34.497 }, 00:15:34.497 { 00:15:34.497 "name": "BaseBdev2", 00:15:34.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.497 "is_configured": false, 00:15:34.497 "data_offset": 0, 00:15:34.497 "data_size": 0 00:15:34.497 }, 00:15:34.497 { 00:15:34.497 "name": "BaseBdev3", 00:15:34.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.497 "is_configured": false, 00:15:34.497 "data_offset": 0, 00:15:34.497 "data_size": 0 00:15:34.497 } 00:15:34.497 ] 00:15:34.497 }' 00:15:34.497 21:58:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:34.497 21:58:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.064 21:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:35.323 [2024-07-13 21:58:54.500195] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:35.323 [2024-07-13 21:58:54.500228] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state 
configuring 00:15:35.323 21:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:35.323 [2024-07-13 21:58:54.672709] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:35.323 [2024-07-13 21:58:54.672748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:35.323 [2024-07-13 21:58:54.672758] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:35.323 [2024-07-13 21:58:54.672772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:35.323 [2024-07-13 21:58:54.672780] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:35.323 [2024-07-13 21:58:54.672791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:35.323 21:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:35.582 [2024-07-13 21:58:54.885237] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:35.582 BaseBdev1 00:15:35.582 21:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:35.583 21:58:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:35.583 21:58:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:35.583 21:58:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:35.583 21:58:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' 
]] 00:15:35.583 21:58:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:35.583 21:58:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:35.842 21:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:35.842 [ 00:15:35.842 { 00:15:35.842 "name": "BaseBdev1", 00:15:35.842 "aliases": [ 00:15:35.842 "c6e3b283-5e9d-4275-ac1c-e29e1d65b7a0" 00:15:35.842 ], 00:15:35.842 "product_name": "Malloc disk", 00:15:35.842 "block_size": 512, 00:15:35.842 "num_blocks": 65536, 00:15:35.842 "uuid": "c6e3b283-5e9d-4275-ac1c-e29e1d65b7a0", 00:15:35.842 "assigned_rate_limits": { 00:15:35.842 "rw_ios_per_sec": 0, 00:15:35.842 "rw_mbytes_per_sec": 0, 00:15:35.842 "r_mbytes_per_sec": 0, 00:15:35.842 "w_mbytes_per_sec": 0 00:15:35.842 }, 00:15:35.842 "claimed": true, 00:15:35.842 "claim_type": "exclusive_write", 00:15:35.842 "zoned": false, 00:15:35.842 "supported_io_types": { 00:15:35.842 "read": true, 00:15:35.842 "write": true, 00:15:35.842 "unmap": true, 00:15:35.842 "flush": true, 00:15:35.842 "reset": true, 00:15:35.843 "nvme_admin": false, 00:15:35.843 "nvme_io": false, 00:15:35.843 "nvme_io_md": false, 00:15:35.843 "write_zeroes": true, 00:15:35.843 "zcopy": true, 00:15:35.843 "get_zone_info": false, 00:15:35.843 "zone_management": false, 00:15:35.843 "zone_append": false, 00:15:35.843 "compare": false, 00:15:35.843 "compare_and_write": false, 00:15:35.843 "abort": true, 00:15:35.843 "seek_hole": false, 00:15:35.843 "seek_data": false, 00:15:35.843 "copy": true, 00:15:35.843 "nvme_iov_md": false 00:15:35.843 }, 00:15:35.843 "memory_domains": [ 00:15:35.843 { 00:15:35.843 "dma_device_id": "system", 00:15:35.843 "dma_device_type": 1 
00:15:35.843 }, 00:15:35.843 { 00:15:35.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.843 "dma_device_type": 2 00:15:35.843 } 00:15:35.843 ], 00:15:35.843 "driver_specific": {} 00:15:35.843 } 00:15:35.843 ] 00:15:36.103 21:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:36.103 21:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:36.103 21:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:36.103 21:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:36.103 21:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:36.103 21:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:36.103 21:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:36.103 21:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:36.103 21:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:36.103 21:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:36.103 21:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:36.103 21:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.103 21:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.103 21:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:36.103 "name": "Existed_Raid", 
00:15:36.103 "uuid": "7c2d28e1-d9ef-443d-8ff8-fd869990e0d0", 00:15:36.103 "strip_size_kb": 64, 00:15:36.103 "state": "configuring", 00:15:36.103 "raid_level": "concat", 00:15:36.103 "superblock": true, 00:15:36.103 "num_base_bdevs": 3, 00:15:36.103 "num_base_bdevs_discovered": 1, 00:15:36.103 "num_base_bdevs_operational": 3, 00:15:36.103 "base_bdevs_list": [ 00:15:36.103 { 00:15:36.103 "name": "BaseBdev1", 00:15:36.103 "uuid": "c6e3b283-5e9d-4275-ac1c-e29e1d65b7a0", 00:15:36.103 "is_configured": true, 00:15:36.103 "data_offset": 2048, 00:15:36.103 "data_size": 63488 00:15:36.103 }, 00:15:36.103 { 00:15:36.103 "name": "BaseBdev2", 00:15:36.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.103 "is_configured": false, 00:15:36.103 "data_offset": 0, 00:15:36.103 "data_size": 0 00:15:36.103 }, 00:15:36.103 { 00:15:36.103 "name": "BaseBdev3", 00:15:36.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.103 "is_configured": false, 00:15:36.103 "data_offset": 0, 00:15:36.103 "data_size": 0 00:15:36.103 } 00:15:36.103 ] 00:15:36.103 }' 00:15:36.103 21:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:36.103 21:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.672 21:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:36.672 [2024-07-13 21:58:56.044321] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:36.672 [2024-07-13 21:58:56.044370] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:15:36.672 21:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 
BaseBdev3' -n Existed_Raid 00:15:36.931 [2024-07-13 21:58:56.212846] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:36.931 [2024-07-13 21:58:56.214568] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:36.931 [2024-07-13 21:58:56.214606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:36.931 [2024-07-13 21:58:56.214617] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:36.931 [2024-07-13 21:58:56.214629] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:36.931 21:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:36.931 21:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:36.931 21:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:36.931 21:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:36.931 21:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:36.931 21:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:36.931 21:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:36.931 21:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:36.931 21:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:36.931 21:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:36.931 21:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:36.931 
21:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:36.931 21:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.931 21:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.190 21:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:37.190 "name": "Existed_Raid", 00:15:37.190 "uuid": "17578ac4-3dbd-4b74-b908-4f8d54c5de78", 00:15:37.190 "strip_size_kb": 64, 00:15:37.190 "state": "configuring", 00:15:37.190 "raid_level": "concat", 00:15:37.190 "superblock": true, 00:15:37.190 "num_base_bdevs": 3, 00:15:37.190 "num_base_bdevs_discovered": 1, 00:15:37.190 "num_base_bdevs_operational": 3, 00:15:37.190 "base_bdevs_list": [ 00:15:37.190 { 00:15:37.190 "name": "BaseBdev1", 00:15:37.190 "uuid": "c6e3b283-5e9d-4275-ac1c-e29e1d65b7a0", 00:15:37.190 "is_configured": true, 00:15:37.190 "data_offset": 2048, 00:15:37.190 "data_size": 63488 00:15:37.190 }, 00:15:37.190 { 00:15:37.190 "name": "BaseBdev2", 00:15:37.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.191 "is_configured": false, 00:15:37.191 "data_offset": 0, 00:15:37.191 "data_size": 0 00:15:37.191 }, 00:15:37.191 { 00:15:37.191 "name": "BaseBdev3", 00:15:37.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.191 "is_configured": false, 00:15:37.191 "data_offset": 0, 00:15:37.191 "data_size": 0 00:15:37.191 } 00:15:37.191 ] 00:15:37.191 }' 00:15:37.191 21:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:37.191 21:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.759 21:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:37.759 [2024-07-13 21:58:57.060343] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:37.759 BaseBdev2 00:15:37.759 21:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:37.759 21:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:37.759 21:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:37.759 21:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:37.759 21:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:37.759 21:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:37.759 21:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:38.018 21:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:38.018 [ 00:15:38.018 { 00:15:38.018 "name": "BaseBdev2", 00:15:38.018 "aliases": [ 00:15:38.018 "e7ee3331-38a9-4e3d-a5bb-c4a7fd781384" 00:15:38.018 ], 00:15:38.018 "product_name": "Malloc disk", 00:15:38.018 "block_size": 512, 00:15:38.018 "num_blocks": 65536, 00:15:38.018 "uuid": "e7ee3331-38a9-4e3d-a5bb-c4a7fd781384", 00:15:38.018 "assigned_rate_limits": { 00:15:38.018 "rw_ios_per_sec": 0, 00:15:38.018 "rw_mbytes_per_sec": 0, 00:15:38.018 "r_mbytes_per_sec": 0, 00:15:38.018 "w_mbytes_per_sec": 0 00:15:38.018 }, 00:15:38.018 "claimed": true, 00:15:38.018 "claim_type": "exclusive_write", 00:15:38.018 "zoned": false, 00:15:38.018 "supported_io_types": { 
00:15:38.018 "read": true, 00:15:38.018 "write": true, 00:15:38.018 "unmap": true, 00:15:38.018 "flush": true, 00:15:38.018 "reset": true, 00:15:38.018 "nvme_admin": false, 00:15:38.018 "nvme_io": false, 00:15:38.018 "nvme_io_md": false, 00:15:38.018 "write_zeroes": true, 00:15:38.018 "zcopy": true, 00:15:38.018 "get_zone_info": false, 00:15:38.018 "zone_management": false, 00:15:38.018 "zone_append": false, 00:15:38.018 "compare": false, 00:15:38.018 "compare_and_write": false, 00:15:38.018 "abort": true, 00:15:38.018 "seek_hole": false, 00:15:38.018 "seek_data": false, 00:15:38.018 "copy": true, 00:15:38.018 "nvme_iov_md": false 00:15:38.018 }, 00:15:38.018 "memory_domains": [ 00:15:38.018 { 00:15:38.018 "dma_device_id": "system", 00:15:38.018 "dma_device_type": 1 00:15:38.018 }, 00:15:38.018 { 00:15:38.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.018 "dma_device_type": 2 00:15:38.018 } 00:15:38.018 ], 00:15:38.018 "driver_specific": {} 00:15:38.018 } 00:15:38.018 ] 00:15:38.277 21:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:38.277 21:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:38.277 21:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:38.277 21:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:38.277 21:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:38.277 21:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:38.277 21:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:38.277 21:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:38.277 21:58:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:38.277 21:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:38.277 21:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:38.277 21:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:38.277 21:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:38.277 21:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.277 21:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.277 21:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:38.277 "name": "Existed_Raid", 00:15:38.277 "uuid": "17578ac4-3dbd-4b74-b908-4f8d54c5de78", 00:15:38.277 "strip_size_kb": 64, 00:15:38.277 "state": "configuring", 00:15:38.277 "raid_level": "concat", 00:15:38.277 "superblock": true, 00:15:38.277 "num_base_bdevs": 3, 00:15:38.277 "num_base_bdevs_discovered": 2, 00:15:38.277 "num_base_bdevs_operational": 3, 00:15:38.277 "base_bdevs_list": [ 00:15:38.277 { 00:15:38.277 "name": "BaseBdev1", 00:15:38.277 "uuid": "c6e3b283-5e9d-4275-ac1c-e29e1d65b7a0", 00:15:38.277 "is_configured": true, 00:15:38.277 "data_offset": 2048, 00:15:38.277 "data_size": 63488 00:15:38.277 }, 00:15:38.277 { 00:15:38.277 "name": "BaseBdev2", 00:15:38.277 "uuid": "e7ee3331-38a9-4e3d-a5bb-c4a7fd781384", 00:15:38.277 "is_configured": true, 00:15:38.277 "data_offset": 2048, 00:15:38.277 "data_size": 63488 00:15:38.277 }, 00:15:38.277 { 00:15:38.277 "name": "BaseBdev3", 00:15:38.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.277 "is_configured": false, 00:15:38.277 "data_offset": 0, 00:15:38.277 
"data_size": 0 00:15:38.277 } 00:15:38.277 ] 00:15:38.277 }' 00:15:38.277 21:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:38.277 21:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.845 21:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:39.104 [2024-07-13 21:58:58.289423] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:39.104 [2024-07-13 21:58:58.289634] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:15:39.104 [2024-07-13 21:58:58.289655] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:39.104 [2024-07-13 21:58:58.289889] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:15:39.104 [2024-07-13 21:58:58.290084] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:15:39.104 [2024-07-13 21:58:58.290095] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:15:39.104 [2024-07-13 21:58:58.290248] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.104 BaseBdev3 00:15:39.104 21:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:39.104 21:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:39.104 21:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:39.104 21:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:39.104 21:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:39.104 
21:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:39.104 21:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:39.104 21:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:39.364 [ 00:15:39.364 { 00:15:39.364 "name": "BaseBdev3", 00:15:39.364 "aliases": [ 00:15:39.364 "183fc7fe-2807-44cd-9725-432b79efadc2" 00:15:39.364 ], 00:15:39.364 "product_name": "Malloc disk", 00:15:39.364 "block_size": 512, 00:15:39.364 "num_blocks": 65536, 00:15:39.364 "uuid": "183fc7fe-2807-44cd-9725-432b79efadc2", 00:15:39.364 "assigned_rate_limits": { 00:15:39.364 "rw_ios_per_sec": 0, 00:15:39.364 "rw_mbytes_per_sec": 0, 00:15:39.364 "r_mbytes_per_sec": 0, 00:15:39.364 "w_mbytes_per_sec": 0 00:15:39.364 }, 00:15:39.364 "claimed": true, 00:15:39.364 "claim_type": "exclusive_write", 00:15:39.364 "zoned": false, 00:15:39.364 "supported_io_types": { 00:15:39.364 "read": true, 00:15:39.364 "write": true, 00:15:39.364 "unmap": true, 00:15:39.364 "flush": true, 00:15:39.364 "reset": true, 00:15:39.364 "nvme_admin": false, 00:15:39.364 "nvme_io": false, 00:15:39.364 "nvme_io_md": false, 00:15:39.364 "write_zeroes": true, 00:15:39.364 "zcopy": true, 00:15:39.364 "get_zone_info": false, 00:15:39.364 "zone_management": false, 00:15:39.364 "zone_append": false, 00:15:39.364 "compare": false, 00:15:39.364 "compare_and_write": false, 00:15:39.364 "abort": true, 00:15:39.364 "seek_hole": false, 00:15:39.364 "seek_data": false, 00:15:39.364 "copy": true, 00:15:39.364 "nvme_iov_md": false 00:15:39.364 }, 00:15:39.364 "memory_domains": [ 00:15:39.364 { 00:15:39.364 "dma_device_id": "system", 00:15:39.364 "dma_device_type": 1 00:15:39.364 }, 
00:15:39.364 { 00:15:39.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.364 "dma_device_type": 2 00:15:39.364 } 00:15:39.364 ], 00:15:39.364 "driver_specific": {} 00:15:39.364 } 00:15:39.364 ] 00:15:39.364 21:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:39.364 21:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:39.364 21:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:39.364 21:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:39.364 21:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:39.364 21:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:39.364 21:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:39.364 21:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:39.364 21:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:39.364 21:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:39.364 21:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:39.364 21:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:39.364 21:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:39.364 21:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.364 21:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:15:39.623 21:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:39.623 "name": "Existed_Raid", 00:15:39.624 "uuid": "17578ac4-3dbd-4b74-b908-4f8d54c5de78", 00:15:39.624 "strip_size_kb": 64, 00:15:39.624 "state": "online", 00:15:39.624 "raid_level": "concat", 00:15:39.624 "superblock": true, 00:15:39.624 "num_base_bdevs": 3, 00:15:39.624 "num_base_bdevs_discovered": 3, 00:15:39.624 "num_base_bdevs_operational": 3, 00:15:39.624 "base_bdevs_list": [ 00:15:39.624 { 00:15:39.624 "name": "BaseBdev1", 00:15:39.624 "uuid": "c6e3b283-5e9d-4275-ac1c-e29e1d65b7a0", 00:15:39.624 "is_configured": true, 00:15:39.624 "data_offset": 2048, 00:15:39.624 "data_size": 63488 00:15:39.624 }, 00:15:39.624 { 00:15:39.624 "name": "BaseBdev2", 00:15:39.624 "uuid": "e7ee3331-38a9-4e3d-a5bb-c4a7fd781384", 00:15:39.624 "is_configured": true, 00:15:39.624 "data_offset": 2048, 00:15:39.624 "data_size": 63488 00:15:39.624 }, 00:15:39.624 { 00:15:39.624 "name": "BaseBdev3", 00:15:39.624 "uuid": "183fc7fe-2807-44cd-9725-432b79efadc2", 00:15:39.624 "is_configured": true, 00:15:39.624 "data_offset": 2048, 00:15:39.624 "data_size": 63488 00:15:39.624 } 00:15:39.624 ] 00:15:39.624 }' 00:15:39.624 21:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:39.624 21:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.192 21:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:40.192 21:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:40.192 21:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:40.192 21:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:40.192 21:58:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:40.192 21:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:40.192 21:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:40.192 21:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:40.192 [2024-07-13 21:58:59.480890] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:40.192 21:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:40.192 "name": "Existed_Raid", 00:15:40.192 "aliases": [ 00:15:40.192 "17578ac4-3dbd-4b74-b908-4f8d54c5de78" 00:15:40.192 ], 00:15:40.192 "product_name": "Raid Volume", 00:15:40.192 "block_size": 512, 00:15:40.192 "num_blocks": 190464, 00:15:40.192 "uuid": "17578ac4-3dbd-4b74-b908-4f8d54c5de78", 00:15:40.192 "assigned_rate_limits": { 00:15:40.192 "rw_ios_per_sec": 0, 00:15:40.192 "rw_mbytes_per_sec": 0, 00:15:40.192 "r_mbytes_per_sec": 0, 00:15:40.192 "w_mbytes_per_sec": 0 00:15:40.192 }, 00:15:40.192 "claimed": false, 00:15:40.192 "zoned": false, 00:15:40.192 "supported_io_types": { 00:15:40.192 "read": true, 00:15:40.192 "write": true, 00:15:40.192 "unmap": true, 00:15:40.192 "flush": true, 00:15:40.192 "reset": true, 00:15:40.192 "nvme_admin": false, 00:15:40.192 "nvme_io": false, 00:15:40.192 "nvme_io_md": false, 00:15:40.192 "write_zeroes": true, 00:15:40.192 "zcopy": false, 00:15:40.192 "get_zone_info": false, 00:15:40.192 "zone_management": false, 00:15:40.192 "zone_append": false, 00:15:40.192 "compare": false, 00:15:40.192 "compare_and_write": false, 00:15:40.192 "abort": false, 00:15:40.192 "seek_hole": false, 00:15:40.192 "seek_data": false, 00:15:40.192 "copy": false, 00:15:40.193 "nvme_iov_md": false 00:15:40.193 }, 00:15:40.193 
"memory_domains": [ 00:15:40.193 { 00:15:40.193 "dma_device_id": "system", 00:15:40.193 "dma_device_type": 1 00:15:40.193 }, 00:15:40.193 { 00:15:40.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.193 "dma_device_type": 2 00:15:40.193 }, 00:15:40.193 { 00:15:40.193 "dma_device_id": "system", 00:15:40.193 "dma_device_type": 1 00:15:40.193 }, 00:15:40.193 { 00:15:40.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.193 "dma_device_type": 2 00:15:40.193 }, 00:15:40.193 { 00:15:40.193 "dma_device_id": "system", 00:15:40.193 "dma_device_type": 1 00:15:40.193 }, 00:15:40.193 { 00:15:40.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.193 "dma_device_type": 2 00:15:40.193 } 00:15:40.193 ], 00:15:40.193 "driver_specific": { 00:15:40.193 "raid": { 00:15:40.193 "uuid": "17578ac4-3dbd-4b74-b908-4f8d54c5de78", 00:15:40.193 "strip_size_kb": 64, 00:15:40.193 "state": "online", 00:15:40.193 "raid_level": "concat", 00:15:40.193 "superblock": true, 00:15:40.193 "num_base_bdevs": 3, 00:15:40.193 "num_base_bdevs_discovered": 3, 00:15:40.193 "num_base_bdevs_operational": 3, 00:15:40.193 "base_bdevs_list": [ 00:15:40.193 { 00:15:40.193 "name": "BaseBdev1", 00:15:40.193 "uuid": "c6e3b283-5e9d-4275-ac1c-e29e1d65b7a0", 00:15:40.193 "is_configured": true, 00:15:40.193 "data_offset": 2048, 00:15:40.193 "data_size": 63488 00:15:40.193 }, 00:15:40.193 { 00:15:40.193 "name": "BaseBdev2", 00:15:40.193 "uuid": "e7ee3331-38a9-4e3d-a5bb-c4a7fd781384", 00:15:40.193 "is_configured": true, 00:15:40.193 "data_offset": 2048, 00:15:40.193 "data_size": 63488 00:15:40.193 }, 00:15:40.193 { 00:15:40.193 "name": "BaseBdev3", 00:15:40.193 "uuid": "183fc7fe-2807-44cd-9725-432b79efadc2", 00:15:40.193 "is_configured": true, 00:15:40.193 "data_offset": 2048, 00:15:40.193 "data_size": 63488 00:15:40.193 } 00:15:40.193 ] 00:15:40.193 } 00:15:40.193 } 00:15:40.193 }' 00:15:40.193 21:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:40.193 21:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:40.193 BaseBdev2 00:15:40.193 BaseBdev3' 00:15:40.193 21:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:40.193 21:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:40.193 21:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:40.453 21:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:40.453 "name": "BaseBdev1", 00:15:40.453 "aliases": [ 00:15:40.453 "c6e3b283-5e9d-4275-ac1c-e29e1d65b7a0" 00:15:40.453 ], 00:15:40.453 "product_name": "Malloc disk", 00:15:40.453 "block_size": 512, 00:15:40.453 "num_blocks": 65536, 00:15:40.453 "uuid": "c6e3b283-5e9d-4275-ac1c-e29e1d65b7a0", 00:15:40.453 "assigned_rate_limits": { 00:15:40.453 "rw_ios_per_sec": 0, 00:15:40.453 "rw_mbytes_per_sec": 0, 00:15:40.453 "r_mbytes_per_sec": 0, 00:15:40.453 "w_mbytes_per_sec": 0 00:15:40.453 }, 00:15:40.453 "claimed": true, 00:15:40.453 "claim_type": "exclusive_write", 00:15:40.453 "zoned": false, 00:15:40.453 "supported_io_types": { 00:15:40.453 "read": true, 00:15:40.453 "write": true, 00:15:40.453 "unmap": true, 00:15:40.453 "flush": true, 00:15:40.453 "reset": true, 00:15:40.453 "nvme_admin": false, 00:15:40.453 "nvme_io": false, 00:15:40.453 "nvme_io_md": false, 00:15:40.453 "write_zeroes": true, 00:15:40.453 "zcopy": true, 00:15:40.453 "get_zone_info": false, 00:15:40.453 "zone_management": false, 00:15:40.453 "zone_append": false, 00:15:40.453 "compare": false, 00:15:40.453 "compare_and_write": false, 00:15:40.453 "abort": true, 00:15:40.453 "seek_hole": false, 00:15:40.453 "seek_data": false, 
00:15:40.453 "copy": true, 00:15:40.453 "nvme_iov_md": false 00:15:40.453 }, 00:15:40.453 "memory_domains": [ 00:15:40.453 { 00:15:40.453 "dma_device_id": "system", 00:15:40.453 "dma_device_type": 1 00:15:40.453 }, 00:15:40.453 { 00:15:40.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.453 "dma_device_type": 2 00:15:40.453 } 00:15:40.453 ], 00:15:40.453 "driver_specific": {} 00:15:40.453 }' 00:15:40.453 21:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.453 21:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.453 21:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:40.453 21:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:40.453 21:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:40.712 21:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:40.712 21:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:40.712 21:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:40.712 21:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:40.712 21:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:40.712 21:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:40.712 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:40.712 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:40.712 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 
00:15:40.712 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:40.972 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:40.972 "name": "BaseBdev2", 00:15:40.972 "aliases": [ 00:15:40.972 "e7ee3331-38a9-4e3d-a5bb-c4a7fd781384" 00:15:40.972 ], 00:15:40.972 "product_name": "Malloc disk", 00:15:40.972 "block_size": 512, 00:15:40.972 "num_blocks": 65536, 00:15:40.972 "uuid": "e7ee3331-38a9-4e3d-a5bb-c4a7fd781384", 00:15:40.972 "assigned_rate_limits": { 00:15:40.972 "rw_ios_per_sec": 0, 00:15:40.972 "rw_mbytes_per_sec": 0, 00:15:40.972 "r_mbytes_per_sec": 0, 00:15:40.972 "w_mbytes_per_sec": 0 00:15:40.972 }, 00:15:40.972 "claimed": true, 00:15:40.972 "claim_type": "exclusive_write", 00:15:40.972 "zoned": false, 00:15:40.972 "supported_io_types": { 00:15:40.972 "read": true, 00:15:40.972 "write": true, 00:15:40.972 "unmap": true, 00:15:40.972 "flush": true, 00:15:40.972 "reset": true, 00:15:40.972 "nvme_admin": false, 00:15:40.972 "nvme_io": false, 00:15:40.972 "nvme_io_md": false, 00:15:40.972 "write_zeroes": true, 00:15:40.972 "zcopy": true, 00:15:40.972 "get_zone_info": false, 00:15:40.972 "zone_management": false, 00:15:40.972 "zone_append": false, 00:15:40.972 "compare": false, 00:15:40.972 "compare_and_write": false, 00:15:40.972 "abort": true, 00:15:40.972 "seek_hole": false, 00:15:40.972 "seek_data": false, 00:15:40.972 "copy": true, 00:15:40.972 "nvme_iov_md": false 00:15:40.972 }, 00:15:40.972 "memory_domains": [ 00:15:40.972 { 00:15:40.972 "dma_device_id": "system", 00:15:40.972 "dma_device_type": 1 00:15:40.972 }, 00:15:40.972 { 00:15:40.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.972 "dma_device_type": 2 00:15:40.972 } 00:15:40.972 ], 00:15:40.972 "driver_specific": {} 00:15:40.972 }' 00:15:40.972 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.972 21:59:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.972 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:40.972 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:40.972 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.232 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:41.232 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:41.232 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:41.232 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:41.232 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.232 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.232 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:41.232 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:41.232 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:41.232 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:41.492 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:41.492 "name": "BaseBdev3", 00:15:41.492 "aliases": [ 00:15:41.492 "183fc7fe-2807-44cd-9725-432b79efadc2" 00:15:41.492 ], 00:15:41.492 "product_name": "Malloc disk", 00:15:41.492 "block_size": 512, 00:15:41.492 "num_blocks": 65536, 00:15:41.492 "uuid": "183fc7fe-2807-44cd-9725-432b79efadc2", 00:15:41.492 "assigned_rate_limits": { 00:15:41.492 
"rw_ios_per_sec": 0, 00:15:41.492 "rw_mbytes_per_sec": 0, 00:15:41.492 "r_mbytes_per_sec": 0, 00:15:41.492 "w_mbytes_per_sec": 0 00:15:41.492 }, 00:15:41.492 "claimed": true, 00:15:41.492 "claim_type": "exclusive_write", 00:15:41.492 "zoned": false, 00:15:41.492 "supported_io_types": { 00:15:41.492 "read": true, 00:15:41.492 "write": true, 00:15:41.492 "unmap": true, 00:15:41.492 "flush": true, 00:15:41.492 "reset": true, 00:15:41.492 "nvme_admin": false, 00:15:41.492 "nvme_io": false, 00:15:41.492 "nvme_io_md": false, 00:15:41.492 "write_zeroes": true, 00:15:41.492 "zcopy": true, 00:15:41.492 "get_zone_info": false, 00:15:41.492 "zone_management": false, 00:15:41.492 "zone_append": false, 00:15:41.492 "compare": false, 00:15:41.492 "compare_and_write": false, 00:15:41.492 "abort": true, 00:15:41.492 "seek_hole": false, 00:15:41.492 "seek_data": false, 00:15:41.492 "copy": true, 00:15:41.492 "nvme_iov_md": false 00:15:41.492 }, 00:15:41.492 "memory_domains": [ 00:15:41.492 { 00:15:41.492 "dma_device_id": "system", 00:15:41.492 "dma_device_type": 1 00:15:41.492 }, 00:15:41.492 { 00:15:41.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.492 "dma_device_type": 2 00:15:41.492 } 00:15:41.492 ], 00:15:41.492 "driver_specific": {} 00:15:41.492 }' 00:15:41.492 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:41.492 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:41.492 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:41.492 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.492 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.492 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:41.492 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:15:41.492 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:41.752 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:41.752 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.752 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.752 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:41.752 21:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:41.752 [2024-07-13 21:59:01.125017] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:41.752 [2024-07-13 21:59:01.125042] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:41.752 [2024-07-13 21:59:01.125088] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:42.011 21:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:42.011 21:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:15:42.011 21:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:42.011 21:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:15:42.011 21:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:42.011 21:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:15:42.011 21:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:42.011 21:59:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:42.011 21:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:42.011 21:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:42.011 21:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:42.011 21:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:42.011 21:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:42.011 21:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:42.011 21:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:42.011 21:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.012 21:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.012 21:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:42.012 "name": "Existed_Raid", 00:15:42.012 "uuid": "17578ac4-3dbd-4b74-b908-4f8d54c5de78", 00:15:42.012 "strip_size_kb": 64, 00:15:42.012 "state": "offline", 00:15:42.012 "raid_level": "concat", 00:15:42.012 "superblock": true, 00:15:42.012 "num_base_bdevs": 3, 00:15:42.012 "num_base_bdevs_discovered": 2, 00:15:42.012 "num_base_bdevs_operational": 2, 00:15:42.012 "base_bdevs_list": [ 00:15:42.012 { 00:15:42.012 "name": null, 00:15:42.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.012 "is_configured": false, 00:15:42.012 "data_offset": 2048, 00:15:42.012 "data_size": 63488 00:15:42.012 }, 00:15:42.012 { 00:15:42.012 "name": "BaseBdev2", 00:15:42.012 "uuid": 
"e7ee3331-38a9-4e3d-a5bb-c4a7fd781384", 00:15:42.012 "is_configured": true, 00:15:42.012 "data_offset": 2048, 00:15:42.012 "data_size": 63488 00:15:42.012 }, 00:15:42.012 { 00:15:42.012 "name": "BaseBdev3", 00:15:42.012 "uuid": "183fc7fe-2807-44cd-9725-432b79efadc2", 00:15:42.012 "is_configured": true, 00:15:42.012 "data_offset": 2048, 00:15:42.012 "data_size": 63488 00:15:42.012 } 00:15:42.012 ] 00:15:42.012 }' 00:15:42.012 21:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:42.012 21:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.600 21:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:42.600 21:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:42.600 21:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:42.600 21:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.874 21:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:42.874 21:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:42.874 21:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:42.875 [2024-07-13 21:59:02.165302] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:43.134 21:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:43.134 21:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:43.134 21:59:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.134 21:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:43.134 21:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:43.134 21:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:43.134 21:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:43.393 [2024-07-13 21:59:02.599447] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:43.393 [2024-07-13 21:59:02.599495] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:15:43.393 21:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:43.393 21:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:43.393 21:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.393 21:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:43.652 21:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:43.652 21:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:43.652 21:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:15:43.652 21:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:43.652 21:59:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:43.652 21:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:43.911 BaseBdev2 00:15:43.911 21:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:43.911 21:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:43.911 21:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:43.911 21:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:43.911 21:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:43.911 21:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:43.911 21:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:43.911 21:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:44.170 [ 00:15:44.170 { 00:15:44.170 "name": "BaseBdev2", 00:15:44.170 "aliases": [ 00:15:44.170 "c840e13c-85b1-472c-88ce-934ac6d46de6" 00:15:44.170 ], 00:15:44.170 "product_name": "Malloc disk", 00:15:44.170 "block_size": 512, 00:15:44.170 "num_blocks": 65536, 00:15:44.170 "uuid": "c840e13c-85b1-472c-88ce-934ac6d46de6", 00:15:44.170 "assigned_rate_limits": { 00:15:44.170 "rw_ios_per_sec": 0, 00:15:44.170 "rw_mbytes_per_sec": 0, 00:15:44.170 "r_mbytes_per_sec": 0, 00:15:44.170 "w_mbytes_per_sec": 0 00:15:44.170 }, 00:15:44.170 "claimed": false, 00:15:44.170 "zoned": false, 
00:15:44.170 "supported_io_types": { 00:15:44.170 "read": true, 00:15:44.170 "write": true, 00:15:44.170 "unmap": true, 00:15:44.170 "flush": true, 00:15:44.170 "reset": true, 00:15:44.170 "nvme_admin": false, 00:15:44.170 "nvme_io": false, 00:15:44.170 "nvme_io_md": false, 00:15:44.170 "write_zeroes": true, 00:15:44.170 "zcopy": true, 00:15:44.170 "get_zone_info": false, 00:15:44.170 "zone_management": false, 00:15:44.170 "zone_append": false, 00:15:44.170 "compare": false, 00:15:44.170 "compare_and_write": false, 00:15:44.170 "abort": true, 00:15:44.170 "seek_hole": false, 00:15:44.170 "seek_data": false, 00:15:44.170 "copy": true, 00:15:44.170 "nvme_iov_md": false 00:15:44.170 }, 00:15:44.170 "memory_domains": [ 00:15:44.170 { 00:15:44.170 "dma_device_id": "system", 00:15:44.170 "dma_device_type": 1 00:15:44.170 }, 00:15:44.170 { 00:15:44.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.170 "dma_device_type": 2 00:15:44.170 } 00:15:44.170 ], 00:15:44.170 "driver_specific": {} 00:15:44.170 } 00:15:44.170 ] 00:15:44.170 21:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:44.170 21:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:44.170 21:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:44.170 21:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:44.429 BaseBdev3 00:15:44.429 21:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:44.429 21:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:44.429 21:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:44.429 21:59:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:44.429 21:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:44.429 21:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:44.429 21:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:44.429 21:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:44.688 [ 00:15:44.688 { 00:15:44.688 "name": "BaseBdev3", 00:15:44.688 "aliases": [ 00:15:44.688 "5b698bf7-7c04-429c-8f3d-ba8db6e2b2e6" 00:15:44.688 ], 00:15:44.688 "product_name": "Malloc disk", 00:15:44.688 "block_size": 512, 00:15:44.688 "num_blocks": 65536, 00:15:44.688 "uuid": "5b698bf7-7c04-429c-8f3d-ba8db6e2b2e6", 00:15:44.688 "assigned_rate_limits": { 00:15:44.688 "rw_ios_per_sec": 0, 00:15:44.688 "rw_mbytes_per_sec": 0, 00:15:44.688 "r_mbytes_per_sec": 0, 00:15:44.688 "w_mbytes_per_sec": 0 00:15:44.688 }, 00:15:44.688 "claimed": false, 00:15:44.688 "zoned": false, 00:15:44.688 "supported_io_types": { 00:15:44.688 "read": true, 00:15:44.688 "write": true, 00:15:44.688 "unmap": true, 00:15:44.688 "flush": true, 00:15:44.688 "reset": true, 00:15:44.688 "nvme_admin": false, 00:15:44.688 "nvme_io": false, 00:15:44.688 "nvme_io_md": false, 00:15:44.688 "write_zeroes": true, 00:15:44.688 "zcopy": true, 00:15:44.688 "get_zone_info": false, 00:15:44.688 "zone_management": false, 00:15:44.688 "zone_append": false, 00:15:44.688 "compare": false, 00:15:44.688 "compare_and_write": false, 00:15:44.688 "abort": true, 00:15:44.688 "seek_hole": false, 00:15:44.688 "seek_data": false, 00:15:44.688 "copy": true, 00:15:44.688 "nvme_iov_md": 
false 00:15:44.688 }, 00:15:44.688 "memory_domains": [ 00:15:44.688 { 00:15:44.688 "dma_device_id": "system", 00:15:44.688 "dma_device_type": 1 00:15:44.688 }, 00:15:44.688 { 00:15:44.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.688 "dma_device_type": 2 00:15:44.688 } 00:15:44.688 ], 00:15:44.688 "driver_specific": {} 00:15:44.688 } 00:15:44.688 ] 00:15:44.688 21:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:44.688 21:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:44.688 21:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:44.688 21:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:44.946 [2024-07-13 21:59:04.110669] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:44.946 [2024-07-13 21:59:04.110711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:44.946 [2024-07-13 21:59:04.110753] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:44.946 [2024-07-13 21:59:04.112516] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:44.946 21:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:44.946 21:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:44.946 21:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:44.947 21:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:44.947 21:59:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:44.947 21:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:44.947 21:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:44.947 21:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:44.947 21:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:44.947 21:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:44.947 21:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.947 21:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.947 21:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:44.947 "name": "Existed_Raid", 00:15:44.947 "uuid": "9976e3ec-f947-466f-aec4-7d4ca37223f4", 00:15:44.947 "strip_size_kb": 64, 00:15:44.947 "state": "configuring", 00:15:44.947 "raid_level": "concat", 00:15:44.947 "superblock": true, 00:15:44.947 "num_base_bdevs": 3, 00:15:44.947 "num_base_bdevs_discovered": 2, 00:15:44.947 "num_base_bdevs_operational": 3, 00:15:44.947 "base_bdevs_list": [ 00:15:44.947 { 00:15:44.947 "name": "BaseBdev1", 00:15:44.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.947 "is_configured": false, 00:15:44.947 "data_offset": 0, 00:15:44.947 "data_size": 0 00:15:44.947 }, 00:15:44.947 { 00:15:44.947 "name": "BaseBdev2", 00:15:44.947 "uuid": "c840e13c-85b1-472c-88ce-934ac6d46de6", 00:15:44.947 "is_configured": true, 00:15:44.947 "data_offset": 2048, 00:15:44.947 "data_size": 63488 00:15:44.947 }, 00:15:44.947 { 00:15:44.947 "name": 
"BaseBdev3", 00:15:44.947 "uuid": "5b698bf7-7c04-429c-8f3d-ba8db6e2b2e6", 00:15:44.947 "is_configured": true, 00:15:44.947 "data_offset": 2048, 00:15:44.947 "data_size": 63488 00:15:44.947 } 00:15:44.947 ] 00:15:44.947 }' 00:15:44.947 21:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:44.947 21:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.514 21:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:45.773 [2024-07-13 21:59:04.904730] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:45.774 21:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:45.774 21:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:45.774 21:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:45.774 21:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:45.774 21:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:45.774 21:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:45.774 21:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:45.774 21:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:45.774 21:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:45.774 21:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:45.774 21:59:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.774 21:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.774 21:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:45.774 "name": "Existed_Raid", 00:15:45.774 "uuid": "9976e3ec-f947-466f-aec4-7d4ca37223f4", 00:15:45.774 "strip_size_kb": 64, 00:15:45.774 "state": "configuring", 00:15:45.774 "raid_level": "concat", 00:15:45.774 "superblock": true, 00:15:45.774 "num_base_bdevs": 3, 00:15:45.774 "num_base_bdevs_discovered": 1, 00:15:45.774 "num_base_bdevs_operational": 3, 00:15:45.774 "base_bdevs_list": [ 00:15:45.774 { 00:15:45.774 "name": "BaseBdev1", 00:15:45.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.774 "is_configured": false, 00:15:45.774 "data_offset": 0, 00:15:45.774 "data_size": 0 00:15:45.774 }, 00:15:45.774 { 00:15:45.774 "name": null, 00:15:45.774 "uuid": "c840e13c-85b1-472c-88ce-934ac6d46de6", 00:15:45.774 "is_configured": false, 00:15:45.774 "data_offset": 2048, 00:15:45.774 "data_size": 63488 00:15:45.774 }, 00:15:45.774 { 00:15:45.774 "name": "BaseBdev3", 00:15:45.774 "uuid": "5b698bf7-7c04-429c-8f3d-ba8db6e2b2e6", 00:15:45.774 "is_configured": true, 00:15:45.774 "data_offset": 2048, 00:15:45.774 "data_size": 63488 00:15:45.774 } 00:15:45.774 ] 00:15:45.774 }' 00:15:45.774 21:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:45.774 21:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.341 21:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.341 21:59:05 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:46.600 21:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:15:46.600 21:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:46.600 [2024-07-13 21:59:05.935932] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:46.600 BaseBdev1 00:15:46.600 21:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:46.600 21:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:46.600 21:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:46.600 21:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:46.600 21:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:46.600 21:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:46.600 21:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:46.858 21:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:47.117 [ 00:15:47.117 { 00:15:47.117 "name": "BaseBdev1", 00:15:47.117 "aliases": [ 00:15:47.117 "37bd5cd8-205c-4e81-9ffb-463c1da952e8" 00:15:47.117 ], 00:15:47.117 "product_name": "Malloc disk", 00:15:47.117 "block_size": 512, 00:15:47.117 "num_blocks": 65536, 00:15:47.117 "uuid": 
"37bd5cd8-205c-4e81-9ffb-463c1da952e8", 00:15:47.117 "assigned_rate_limits": { 00:15:47.117 "rw_ios_per_sec": 0, 00:15:47.117 "rw_mbytes_per_sec": 0, 00:15:47.117 "r_mbytes_per_sec": 0, 00:15:47.117 "w_mbytes_per_sec": 0 00:15:47.117 }, 00:15:47.117 "claimed": true, 00:15:47.117 "claim_type": "exclusive_write", 00:15:47.117 "zoned": false, 00:15:47.117 "supported_io_types": { 00:15:47.117 "read": true, 00:15:47.117 "write": true, 00:15:47.117 "unmap": true, 00:15:47.117 "flush": true, 00:15:47.117 "reset": true, 00:15:47.117 "nvme_admin": false, 00:15:47.117 "nvme_io": false, 00:15:47.117 "nvme_io_md": false, 00:15:47.117 "write_zeroes": true, 00:15:47.117 "zcopy": true, 00:15:47.118 "get_zone_info": false, 00:15:47.118 "zone_management": false, 00:15:47.118 "zone_append": false, 00:15:47.118 "compare": false, 00:15:47.118 "compare_and_write": false, 00:15:47.118 "abort": true, 00:15:47.118 "seek_hole": false, 00:15:47.118 "seek_data": false, 00:15:47.118 "copy": true, 00:15:47.118 "nvme_iov_md": false 00:15:47.118 }, 00:15:47.118 "memory_domains": [ 00:15:47.118 { 00:15:47.118 "dma_device_id": "system", 00:15:47.118 "dma_device_type": 1 00:15:47.118 }, 00:15:47.118 { 00:15:47.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.118 "dma_device_type": 2 00:15:47.118 } 00:15:47.118 ], 00:15:47.118 "driver_specific": {} 00:15:47.118 } 00:15:47.118 ] 00:15:47.118 21:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:47.118 21:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:47.118 21:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:47.118 21:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:47.118 21:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 
00:15:47.118 21:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:47.118 21:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:47.118 21:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:47.118 21:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:47.118 21:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:47.118 21:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:47.118 21:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.118 21:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.118 21:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:47.118 "name": "Existed_Raid", 00:15:47.118 "uuid": "9976e3ec-f947-466f-aec4-7d4ca37223f4", 00:15:47.118 "strip_size_kb": 64, 00:15:47.118 "state": "configuring", 00:15:47.118 "raid_level": "concat", 00:15:47.118 "superblock": true, 00:15:47.118 "num_base_bdevs": 3, 00:15:47.118 "num_base_bdevs_discovered": 2, 00:15:47.118 "num_base_bdevs_operational": 3, 00:15:47.118 "base_bdevs_list": [ 00:15:47.118 { 00:15:47.118 "name": "BaseBdev1", 00:15:47.118 "uuid": "37bd5cd8-205c-4e81-9ffb-463c1da952e8", 00:15:47.118 "is_configured": true, 00:15:47.118 "data_offset": 2048, 00:15:47.118 "data_size": 63488 00:15:47.118 }, 00:15:47.118 { 00:15:47.118 "name": null, 00:15:47.118 "uuid": "c840e13c-85b1-472c-88ce-934ac6d46de6", 00:15:47.118 "is_configured": false, 00:15:47.118 "data_offset": 2048, 00:15:47.118 "data_size": 63488 00:15:47.118 }, 00:15:47.118 { 
00:15:47.118 "name": "BaseBdev3", 00:15:47.118 "uuid": "5b698bf7-7c04-429c-8f3d-ba8db6e2b2e6", 00:15:47.118 "is_configured": true, 00:15:47.118 "data_offset": 2048, 00:15:47.118 "data_size": 63488 00:15:47.118 } 00:15:47.118 ] 00:15:47.118 }' 00:15:47.118 21:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:47.118 21:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.684 21:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.684 21:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:47.942 21:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:47.942 21:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:47.942 [2024-07-13 21:59:07.223388] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:47.942 21:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:47.942 21:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:47.942 21:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:47.942 21:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:47.942 21:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:47.942 21:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:47.942 21:59:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:47.942 21:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:47.942 21:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:47.942 21:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:47.942 21:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.942 21:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.201 21:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:48.201 "name": "Existed_Raid", 00:15:48.201 "uuid": "9976e3ec-f947-466f-aec4-7d4ca37223f4", 00:15:48.201 "strip_size_kb": 64, 00:15:48.201 "state": "configuring", 00:15:48.201 "raid_level": "concat", 00:15:48.201 "superblock": true, 00:15:48.201 "num_base_bdevs": 3, 00:15:48.201 "num_base_bdevs_discovered": 1, 00:15:48.201 "num_base_bdevs_operational": 3, 00:15:48.201 "base_bdevs_list": [ 00:15:48.201 { 00:15:48.201 "name": "BaseBdev1", 00:15:48.201 "uuid": "37bd5cd8-205c-4e81-9ffb-463c1da952e8", 00:15:48.201 "is_configured": true, 00:15:48.201 "data_offset": 2048, 00:15:48.201 "data_size": 63488 00:15:48.201 }, 00:15:48.201 { 00:15:48.201 "name": null, 00:15:48.201 "uuid": "c840e13c-85b1-472c-88ce-934ac6d46de6", 00:15:48.201 "is_configured": false, 00:15:48.201 "data_offset": 2048, 00:15:48.201 "data_size": 63488 00:15:48.201 }, 00:15:48.201 { 00:15:48.201 "name": null, 00:15:48.201 "uuid": "5b698bf7-7c04-429c-8f3d-ba8db6e2b2e6", 00:15:48.201 "is_configured": false, 00:15:48.201 "data_offset": 2048, 00:15:48.201 "data_size": 63488 00:15:48.201 } 00:15:48.201 ] 00:15:48.201 }' 00:15:48.201 21:59:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:48.201 21:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.770 21:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.770 21:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:48.770 21:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:48.770 21:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:49.029 [2024-07-13 21:59:08.210022] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:49.029 21:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:49.029 21:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:49.029 21:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:49.029 21:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:49.029 21:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:49.029 21:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:49.029 21:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:49.029 21:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:49.029 21:59:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:49.029 21:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:49.029 21:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.029 21:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.029 21:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:49.029 "name": "Existed_Raid", 00:15:49.029 "uuid": "9976e3ec-f947-466f-aec4-7d4ca37223f4", 00:15:49.029 "strip_size_kb": 64, 00:15:49.029 "state": "configuring", 00:15:49.029 "raid_level": "concat", 00:15:49.029 "superblock": true, 00:15:49.029 "num_base_bdevs": 3, 00:15:49.029 "num_base_bdevs_discovered": 2, 00:15:49.029 "num_base_bdevs_operational": 3, 00:15:49.029 "base_bdevs_list": [ 00:15:49.029 { 00:15:49.029 "name": "BaseBdev1", 00:15:49.029 "uuid": "37bd5cd8-205c-4e81-9ffb-463c1da952e8", 00:15:49.029 "is_configured": true, 00:15:49.029 "data_offset": 2048, 00:15:49.029 "data_size": 63488 00:15:49.029 }, 00:15:49.029 { 00:15:49.029 "name": null, 00:15:49.029 "uuid": "c840e13c-85b1-472c-88ce-934ac6d46de6", 00:15:49.029 "is_configured": false, 00:15:49.029 "data_offset": 2048, 00:15:49.029 "data_size": 63488 00:15:49.029 }, 00:15:49.029 { 00:15:49.029 "name": "BaseBdev3", 00:15:49.029 "uuid": "5b698bf7-7c04-429c-8f3d-ba8db6e2b2e6", 00:15:49.029 "is_configured": true, 00:15:49.029 "data_offset": 2048, 00:15:49.029 "data_size": 63488 00:15:49.029 } 00:15:49.029 ] 00:15:49.029 }' 00:15:49.029 21:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:49.029 21:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.598 21:59:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.598 21:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:49.857 21:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:49.857 21:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:49.857 [2024-07-13 21:59:09.136504] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:49.857 21:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:49.857 21:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:49.857 21:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:49.857 21:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:49.857 21:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:49.857 21:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:49.857 21:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:49.857 21:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:49.857 21:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:49.857 21:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:50.117 21:59:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.117 21:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.117 21:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:50.117 "name": "Existed_Raid", 00:15:50.117 "uuid": "9976e3ec-f947-466f-aec4-7d4ca37223f4", 00:15:50.117 "strip_size_kb": 64, 00:15:50.117 "state": "configuring", 00:15:50.117 "raid_level": "concat", 00:15:50.117 "superblock": true, 00:15:50.117 "num_base_bdevs": 3, 00:15:50.117 "num_base_bdevs_discovered": 1, 00:15:50.117 "num_base_bdevs_operational": 3, 00:15:50.117 "base_bdevs_list": [ 00:15:50.117 { 00:15:50.117 "name": null, 00:15:50.117 "uuid": "37bd5cd8-205c-4e81-9ffb-463c1da952e8", 00:15:50.117 "is_configured": false, 00:15:50.117 "data_offset": 2048, 00:15:50.117 "data_size": 63488 00:15:50.117 }, 00:15:50.117 { 00:15:50.117 "name": null, 00:15:50.117 "uuid": "c840e13c-85b1-472c-88ce-934ac6d46de6", 00:15:50.117 "is_configured": false, 00:15:50.117 "data_offset": 2048, 00:15:50.117 "data_size": 63488 00:15:50.117 }, 00:15:50.117 { 00:15:50.117 "name": "BaseBdev3", 00:15:50.117 "uuid": "5b698bf7-7c04-429c-8f3d-ba8db6e2b2e6", 00:15:50.117 "is_configured": true, 00:15:50.117 "data_offset": 2048, 00:15:50.117 "data_size": 63488 00:15:50.117 } 00:15:50.117 ] 00:15:50.117 }' 00:15:50.117 21:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:50.117 21:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.685 21:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:50.685 21:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.685 21:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:50.685 21:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:50.945 [2024-07-13 21:59:10.198966] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:50.945 21:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:50.945 21:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:50.945 21:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:50.945 21:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:50.945 21:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:50.945 21:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:50.945 21:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:50.946 21:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:50.946 21:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:50.946 21:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:50.946 21:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.946 21:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.205 21:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:51.205 "name": "Existed_Raid", 00:15:51.205 "uuid": "9976e3ec-f947-466f-aec4-7d4ca37223f4", 00:15:51.205 "strip_size_kb": 64, 00:15:51.205 "state": "configuring", 00:15:51.205 "raid_level": "concat", 00:15:51.205 "superblock": true, 00:15:51.205 "num_base_bdevs": 3, 00:15:51.205 "num_base_bdevs_discovered": 2, 00:15:51.205 "num_base_bdevs_operational": 3, 00:15:51.205 "base_bdevs_list": [ 00:15:51.205 { 00:15:51.205 "name": null, 00:15:51.205 "uuid": "37bd5cd8-205c-4e81-9ffb-463c1da952e8", 00:15:51.205 "is_configured": false, 00:15:51.205 "data_offset": 2048, 00:15:51.205 "data_size": 63488 00:15:51.205 }, 00:15:51.205 { 00:15:51.205 "name": "BaseBdev2", 00:15:51.205 "uuid": "c840e13c-85b1-472c-88ce-934ac6d46de6", 00:15:51.205 "is_configured": true, 00:15:51.205 "data_offset": 2048, 00:15:51.205 "data_size": 63488 00:15:51.205 }, 00:15:51.205 { 00:15:51.205 "name": "BaseBdev3", 00:15:51.205 "uuid": "5b698bf7-7c04-429c-8f3d-ba8db6e2b2e6", 00:15:51.205 "is_configured": true, 00:15:51.205 "data_offset": 2048, 00:15:51.205 "data_size": 63488 00:15:51.205 } 00:15:51.205 ] 00:15:51.205 }' 00:15:51.205 21:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:51.205 21:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.774 21:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.774 21:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:51.774 21:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:51.774 21:59:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.774 21:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:52.033 21:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 37bd5cd8-205c-4e81-9ffb-463c1da952e8 00:15:52.033 [2024-07-13 21:59:11.396041] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:52.033 [2024-07-13 21:59:11.396253] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041780 00:15:52.033 [2024-07-13 21:59:11.396271] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:52.033 [2024-07-13 21:59:11.396502] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:15:52.033 [2024-07-13 21:59:11.396659] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041780 00:15:52.033 [2024-07-13 21:59:11.396670] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000041780 00:15:52.033 [2024-07-13 21:59:11.396809] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.033 NewBaseBdev 00:15:52.033 21:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:52.033 21:59:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:15:52.033 21:59:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:52.033 21:59:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:52.033 21:59:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:52.033 21:59:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:52.033 21:59:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:52.292 21:59:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:52.550 [ 00:15:52.550 { 00:15:52.550 "name": "NewBaseBdev", 00:15:52.550 "aliases": [ 00:15:52.550 "37bd5cd8-205c-4e81-9ffb-463c1da952e8" 00:15:52.550 ], 00:15:52.550 "product_name": "Malloc disk", 00:15:52.550 "block_size": 512, 00:15:52.550 "num_blocks": 65536, 00:15:52.550 "uuid": "37bd5cd8-205c-4e81-9ffb-463c1da952e8", 00:15:52.550 "assigned_rate_limits": { 00:15:52.550 "rw_ios_per_sec": 0, 00:15:52.550 "rw_mbytes_per_sec": 0, 00:15:52.550 "r_mbytes_per_sec": 0, 00:15:52.550 "w_mbytes_per_sec": 0 00:15:52.550 }, 00:15:52.550 "claimed": true, 00:15:52.550 "claim_type": "exclusive_write", 00:15:52.550 "zoned": false, 00:15:52.550 "supported_io_types": { 00:15:52.550 "read": true, 00:15:52.550 "write": true, 00:15:52.550 "unmap": true, 00:15:52.550 "flush": true, 00:15:52.550 "reset": true, 00:15:52.550 "nvme_admin": false, 00:15:52.550 "nvme_io": false, 00:15:52.550 "nvme_io_md": false, 00:15:52.550 "write_zeroes": true, 00:15:52.550 "zcopy": true, 00:15:52.550 "get_zone_info": false, 00:15:52.550 "zone_management": false, 00:15:52.550 "zone_append": false, 00:15:52.550 "compare": false, 00:15:52.550 "compare_and_write": false, 00:15:52.550 "abort": true, 00:15:52.550 "seek_hole": false, 00:15:52.550 "seek_data": false, 00:15:52.550 "copy": true, 00:15:52.550 "nvme_iov_md": false 00:15:52.550 }, 00:15:52.550 "memory_domains": [ 00:15:52.550 { 00:15:52.550 
"dma_device_id": "system", 00:15:52.550 "dma_device_type": 1 00:15:52.550 }, 00:15:52.550 { 00:15:52.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.550 "dma_device_type": 2 00:15:52.550 } 00:15:52.550 ], 00:15:52.550 "driver_specific": {} 00:15:52.550 } 00:15:52.550 ] 00:15:52.550 21:59:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:52.550 21:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:52.551 21:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:52.551 21:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:52.551 21:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:52.551 21:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:52.551 21:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:52.551 21:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:52.551 21:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:52.551 21:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:52.551 21:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:52.551 21:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.551 21:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.551 21:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:15:52.551 "name": "Existed_Raid", 00:15:52.551 "uuid": "9976e3ec-f947-466f-aec4-7d4ca37223f4", 00:15:52.551 "strip_size_kb": 64, 00:15:52.551 "state": "online", 00:15:52.551 "raid_level": "concat", 00:15:52.551 "superblock": true, 00:15:52.551 "num_base_bdevs": 3, 00:15:52.551 "num_base_bdevs_discovered": 3, 00:15:52.551 "num_base_bdevs_operational": 3, 00:15:52.551 "base_bdevs_list": [ 00:15:52.551 { 00:15:52.551 "name": "NewBaseBdev", 00:15:52.551 "uuid": "37bd5cd8-205c-4e81-9ffb-463c1da952e8", 00:15:52.551 "is_configured": true, 00:15:52.551 "data_offset": 2048, 00:15:52.551 "data_size": 63488 00:15:52.551 }, 00:15:52.551 { 00:15:52.551 "name": "BaseBdev2", 00:15:52.551 "uuid": "c840e13c-85b1-472c-88ce-934ac6d46de6", 00:15:52.551 "is_configured": true, 00:15:52.551 "data_offset": 2048, 00:15:52.551 "data_size": 63488 00:15:52.551 }, 00:15:52.551 { 00:15:52.551 "name": "BaseBdev3", 00:15:52.551 "uuid": "5b698bf7-7c04-429c-8f3d-ba8db6e2b2e6", 00:15:52.551 "is_configured": true, 00:15:52.551 "data_offset": 2048, 00:15:52.551 "data_size": 63488 00:15:52.551 } 00:15:52.551 ] 00:15:52.551 }' 00:15:52.551 21:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:52.551 21:59:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.119 21:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:53.119 21:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:53.119 21:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:53.119 21:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:53.119 21:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:53.119 21:59:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@198 -- # local name 00:15:53.119 21:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:53.119 21:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:53.378 [2024-07-13 21:59:12.551440] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:53.378 21:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:53.378 "name": "Existed_Raid", 00:15:53.378 "aliases": [ 00:15:53.378 "9976e3ec-f947-466f-aec4-7d4ca37223f4" 00:15:53.378 ], 00:15:53.378 "product_name": "Raid Volume", 00:15:53.378 "block_size": 512, 00:15:53.378 "num_blocks": 190464, 00:15:53.378 "uuid": "9976e3ec-f947-466f-aec4-7d4ca37223f4", 00:15:53.378 "assigned_rate_limits": { 00:15:53.378 "rw_ios_per_sec": 0, 00:15:53.378 "rw_mbytes_per_sec": 0, 00:15:53.378 "r_mbytes_per_sec": 0, 00:15:53.378 "w_mbytes_per_sec": 0 00:15:53.378 }, 00:15:53.378 "claimed": false, 00:15:53.378 "zoned": false, 00:15:53.378 "supported_io_types": { 00:15:53.378 "read": true, 00:15:53.378 "write": true, 00:15:53.378 "unmap": true, 00:15:53.378 "flush": true, 00:15:53.378 "reset": true, 00:15:53.378 "nvme_admin": false, 00:15:53.378 "nvme_io": false, 00:15:53.378 "nvme_io_md": false, 00:15:53.378 "write_zeroes": true, 00:15:53.378 "zcopy": false, 00:15:53.378 "get_zone_info": false, 00:15:53.378 "zone_management": false, 00:15:53.378 "zone_append": false, 00:15:53.379 "compare": false, 00:15:53.379 "compare_and_write": false, 00:15:53.379 "abort": false, 00:15:53.379 "seek_hole": false, 00:15:53.379 "seek_data": false, 00:15:53.379 "copy": false, 00:15:53.379 "nvme_iov_md": false 00:15:53.379 }, 00:15:53.379 "memory_domains": [ 00:15:53.379 { 00:15:53.379 "dma_device_id": "system", 00:15:53.379 "dma_device_type": 1 00:15:53.379 }, 00:15:53.379 { 00:15:53.379 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.379 "dma_device_type": 2 00:15:53.379 }, 00:15:53.379 { 00:15:53.379 "dma_device_id": "system", 00:15:53.379 "dma_device_type": 1 00:15:53.379 }, 00:15:53.379 { 00:15:53.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.379 "dma_device_type": 2 00:15:53.379 }, 00:15:53.379 { 00:15:53.379 "dma_device_id": "system", 00:15:53.379 "dma_device_type": 1 00:15:53.379 }, 00:15:53.379 { 00:15:53.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.379 "dma_device_type": 2 00:15:53.379 } 00:15:53.379 ], 00:15:53.379 "driver_specific": { 00:15:53.379 "raid": { 00:15:53.379 "uuid": "9976e3ec-f947-466f-aec4-7d4ca37223f4", 00:15:53.379 "strip_size_kb": 64, 00:15:53.379 "state": "online", 00:15:53.379 "raid_level": "concat", 00:15:53.379 "superblock": true, 00:15:53.379 "num_base_bdevs": 3, 00:15:53.379 "num_base_bdevs_discovered": 3, 00:15:53.379 "num_base_bdevs_operational": 3, 00:15:53.379 "base_bdevs_list": [ 00:15:53.379 { 00:15:53.379 "name": "NewBaseBdev", 00:15:53.379 "uuid": "37bd5cd8-205c-4e81-9ffb-463c1da952e8", 00:15:53.379 "is_configured": true, 00:15:53.379 "data_offset": 2048, 00:15:53.379 "data_size": 63488 00:15:53.379 }, 00:15:53.379 { 00:15:53.379 "name": "BaseBdev2", 00:15:53.379 "uuid": "c840e13c-85b1-472c-88ce-934ac6d46de6", 00:15:53.379 "is_configured": true, 00:15:53.379 "data_offset": 2048, 00:15:53.379 "data_size": 63488 00:15:53.379 }, 00:15:53.379 { 00:15:53.379 "name": "BaseBdev3", 00:15:53.379 "uuid": "5b698bf7-7c04-429c-8f3d-ba8db6e2b2e6", 00:15:53.379 "is_configured": true, 00:15:53.379 "data_offset": 2048, 00:15:53.379 "data_size": 63488 00:15:53.379 } 00:15:53.379 ] 00:15:53.379 } 00:15:53.379 } 00:15:53.379 }' 00:15:53.379 21:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:53.379 21:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # 
base_bdev_names='NewBaseBdev 00:15:53.379 BaseBdev2 00:15:53.379 BaseBdev3' 00:15:53.379 21:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:53.379 21:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:53.379 21:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:53.638 21:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:53.638 "name": "NewBaseBdev", 00:15:53.638 "aliases": [ 00:15:53.638 "37bd5cd8-205c-4e81-9ffb-463c1da952e8" 00:15:53.638 ], 00:15:53.638 "product_name": "Malloc disk", 00:15:53.639 "block_size": 512, 00:15:53.639 "num_blocks": 65536, 00:15:53.639 "uuid": "37bd5cd8-205c-4e81-9ffb-463c1da952e8", 00:15:53.639 "assigned_rate_limits": { 00:15:53.639 "rw_ios_per_sec": 0, 00:15:53.639 "rw_mbytes_per_sec": 0, 00:15:53.639 "r_mbytes_per_sec": 0, 00:15:53.639 "w_mbytes_per_sec": 0 00:15:53.639 }, 00:15:53.639 "claimed": true, 00:15:53.639 "claim_type": "exclusive_write", 00:15:53.639 "zoned": false, 00:15:53.639 "supported_io_types": { 00:15:53.639 "read": true, 00:15:53.639 "write": true, 00:15:53.639 "unmap": true, 00:15:53.639 "flush": true, 00:15:53.639 "reset": true, 00:15:53.639 "nvme_admin": false, 00:15:53.639 "nvme_io": false, 00:15:53.639 "nvme_io_md": false, 00:15:53.639 "write_zeroes": true, 00:15:53.639 "zcopy": true, 00:15:53.639 "get_zone_info": false, 00:15:53.639 "zone_management": false, 00:15:53.639 "zone_append": false, 00:15:53.639 "compare": false, 00:15:53.639 "compare_and_write": false, 00:15:53.639 "abort": true, 00:15:53.639 "seek_hole": false, 00:15:53.639 "seek_data": false, 00:15:53.639 "copy": true, 00:15:53.639 "nvme_iov_md": false 00:15:53.639 }, 00:15:53.639 "memory_domains": [ 00:15:53.639 { 00:15:53.639 "dma_device_id": "system", 
00:15:53.639 "dma_device_type": 1 00:15:53.639 }, 00:15:53.639 { 00:15:53.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.639 "dma_device_type": 2 00:15:53.639 } 00:15:53.639 ], 00:15:53.639 "driver_specific": {} 00:15:53.639 }' 00:15:53.639 21:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:53.639 21:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:53.639 21:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:53.639 21:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:53.639 21:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:53.639 21:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:53.639 21:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:53.639 21:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:53.639 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:53.639 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:53.898 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:53.898 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:53.898 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:53.898 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:53.898 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:53.898 21:59:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:53.898 "name": "BaseBdev2", 00:15:53.898 "aliases": [ 00:15:53.898 "c840e13c-85b1-472c-88ce-934ac6d46de6" 00:15:53.898 ], 00:15:53.898 "product_name": "Malloc disk", 00:15:53.898 "block_size": 512, 00:15:53.898 "num_blocks": 65536, 00:15:53.898 "uuid": "c840e13c-85b1-472c-88ce-934ac6d46de6", 00:15:53.898 "assigned_rate_limits": { 00:15:53.898 "rw_ios_per_sec": 0, 00:15:53.898 "rw_mbytes_per_sec": 0, 00:15:53.898 "r_mbytes_per_sec": 0, 00:15:53.898 "w_mbytes_per_sec": 0 00:15:53.898 }, 00:15:53.898 "claimed": true, 00:15:53.898 "claim_type": "exclusive_write", 00:15:53.898 "zoned": false, 00:15:53.898 "supported_io_types": { 00:15:53.898 "read": true, 00:15:53.898 "write": true, 00:15:53.898 "unmap": true, 00:15:53.898 "flush": true, 00:15:53.898 "reset": true, 00:15:53.898 "nvme_admin": false, 00:15:53.898 "nvme_io": false, 00:15:53.898 "nvme_io_md": false, 00:15:53.898 "write_zeroes": true, 00:15:53.898 "zcopy": true, 00:15:53.898 "get_zone_info": false, 00:15:53.898 "zone_management": false, 00:15:53.898 "zone_append": false, 00:15:53.898 "compare": false, 00:15:53.898 "compare_and_write": false, 00:15:53.898 "abort": true, 00:15:53.898 "seek_hole": false, 00:15:53.898 "seek_data": false, 00:15:53.898 "copy": true, 00:15:53.898 "nvme_iov_md": false 00:15:53.898 }, 00:15:53.898 "memory_domains": [ 00:15:53.898 { 00:15:53.898 "dma_device_id": "system", 00:15:53.898 "dma_device_type": 1 00:15:53.898 }, 00:15:53.898 { 00:15:53.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.898 "dma_device_type": 2 00:15:53.898 } 00:15:53.898 ], 00:15:53.898 "driver_specific": {} 00:15:53.898 }' 00:15:53.898 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:54.156 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:54.156 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:54.156 
21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:54.156 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:54.156 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:54.156 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:54.156 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:54.156 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:54.156 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:54.156 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:54.156 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:54.156 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:54.156 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:54.156 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:54.414 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:54.414 "name": "BaseBdev3", 00:15:54.414 "aliases": [ 00:15:54.414 "5b698bf7-7c04-429c-8f3d-ba8db6e2b2e6" 00:15:54.414 ], 00:15:54.414 "product_name": "Malloc disk", 00:15:54.414 "block_size": 512, 00:15:54.414 "num_blocks": 65536, 00:15:54.414 "uuid": "5b698bf7-7c04-429c-8f3d-ba8db6e2b2e6", 00:15:54.414 "assigned_rate_limits": { 00:15:54.414 "rw_ios_per_sec": 0, 00:15:54.414 "rw_mbytes_per_sec": 0, 00:15:54.414 "r_mbytes_per_sec": 0, 00:15:54.414 "w_mbytes_per_sec": 0 00:15:54.414 }, 00:15:54.414 "claimed": true, 
00:15:54.414 "claim_type": "exclusive_write", 00:15:54.414 "zoned": false, 00:15:54.414 "supported_io_types": { 00:15:54.414 "read": true, 00:15:54.414 "write": true, 00:15:54.414 "unmap": true, 00:15:54.414 "flush": true, 00:15:54.414 "reset": true, 00:15:54.414 "nvme_admin": false, 00:15:54.414 "nvme_io": false, 00:15:54.414 "nvme_io_md": false, 00:15:54.414 "write_zeroes": true, 00:15:54.414 "zcopy": true, 00:15:54.414 "get_zone_info": false, 00:15:54.414 "zone_management": false, 00:15:54.414 "zone_append": false, 00:15:54.414 "compare": false, 00:15:54.414 "compare_and_write": false, 00:15:54.414 "abort": true, 00:15:54.414 "seek_hole": false, 00:15:54.414 "seek_data": false, 00:15:54.414 "copy": true, 00:15:54.414 "nvme_iov_md": false 00:15:54.414 }, 00:15:54.414 "memory_domains": [ 00:15:54.414 { 00:15:54.414 "dma_device_id": "system", 00:15:54.414 "dma_device_type": 1 00:15:54.414 }, 00:15:54.414 { 00:15:54.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.414 "dma_device_type": 2 00:15:54.414 } 00:15:54.414 ], 00:15:54.414 "driver_specific": {} 00:15:54.414 }' 00:15:54.414 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:54.414 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:54.414 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:54.414 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:54.672 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:54.672 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:54.672 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:54.672 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:54.672 21:59:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:54.672 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:54.672 21:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:54.672 21:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:54.672 21:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:54.930 [2024-07-13 21:59:14.143323] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:54.930 [2024-07-13 21:59:14.143347] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:54.930 [2024-07-13 21:59:14.143417] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:54.930 [2024-07-13 21:59:14.143472] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:54.930 [2024-07-13 21:59:14.143488] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041780 name Existed_Raid, state offline 00:15:54.930 21:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1386990 00:15:54.930 21:59:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1386990 ']' 00:15:54.930 21:59:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 1386990 00:15:54.930 21:59:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:15:54.930 21:59:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:54.930 21:59:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1386990 00:15:54.930 21:59:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:54.930 21:59:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:54.930 21:59:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1386990' 00:15:54.930 killing process with pid 1386990 00:15:54.930 21:59:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 1386990 00:15:54.930 [2024-07-13 21:59:14.212974] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:54.930 21:59:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 1386990 00:15:55.188 [2024-07-13 21:59:14.438535] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:56.619 21:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:56.619 00:15:56.619 real 0m22.986s 00:15:56.619 user 0m40.284s 00:15:56.619 sys 0m4.284s 00:15:56.619 21:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:56.619 21:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.619 ************************************ 00:15:56.619 END TEST raid_state_function_test_sb 00:15:56.619 ************************************ 00:15:56.619 21:59:15 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:56.619 21:59:15 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:15:56.619 21:59:15 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:56.619 21:59:15 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:56.619 21:59:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:56.619 ************************************ 00:15:56.619 START TEST raid_superblock_test 00:15:56.619 ************************************ 
00:15:56.619 21:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 3 00:15:56.619 21:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:15:56.619 21:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:15:56.619 21:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:15:56.619 21:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:15:56.619 21:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:15:56.619 21:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:15:56.619 21:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:15:56.619 21:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:15:56.619 21:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:15:56.619 21:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:15:56.619 21:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:15:56.619 21:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:15:56.619 21:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:15:56.619 21:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:15:56.619 21:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:15:56.619 21:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:15:56.619 21:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1391517 00:15:56.619 21:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # 
waitforlisten 1391517 /var/tmp/spdk-raid.sock 00:15:56.619 21:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:56.619 21:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 1391517 ']' 00:15:56.619 21:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:56.619 21:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:56.619 21:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:56.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:56.619 21:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:56.619 21:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.619 [2024-07-13 21:59:15.834051] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:15:56.619 [2024-07-13 21:59:15.834141] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1391517 ] 00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:56.619 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:56.619 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:56.619 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:56.619 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:56.619 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:56.619 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:56.619 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:56.619 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:56.619 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:56.619 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:56.619 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:56.619 EAL: Requested device 0000:3d:02.3 cannot be used 
00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:56.619 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:56.619 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:56.619 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:56.619 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:56.619 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:56.619 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:56.619 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:56.619 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:56.619 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:56.619 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:56.619 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:56.619 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:56.619 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:56.619 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:56.619 
qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:56.619 EAL: Requested device 0000:3f:02.2 cannot be used
00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:56.619 EAL: Requested device 0000:3f:02.3 cannot be used
00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:56.619 EAL: Requested device 0000:3f:02.4 cannot be used
00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:56.619 EAL: Requested device 0000:3f:02.5 cannot be used
00:15:56.619 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:56.620 EAL: Requested device 0000:3f:02.6 cannot be used
00:15:56.620 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:56.620 EAL: Requested device 0000:3f:02.7 cannot be used
00:15:56.620 [2024-07-13 21:59:15.993056] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:56.880 [2024-07-13 21:59:16.205499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:15:57.203 [2024-07-13 21:59:16.452298] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:57.203 [2024-07-13 21:59:16.452330] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:57.462 21:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:57.462 21:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0
00:15:57.462 21:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 ))
00:15:57.462 21:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs ))
00:15:57.462 21:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1
00:15:57.462 21:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1
00:15:57.462 21:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:15:57.462 21:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:57.462 21:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt)
00:15:57.462 21:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:57.462 21:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:15:57.462 malloc1
00:15:57.462 21:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:57.721 [2024-07-13 21:59:16.953948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:57.721 [2024-07-13 21:59:16.954005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:57.721 [2024-07-13 21:59:16.954045] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680
00:15:57.721 [2024-07-13 21:59:16.954058] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:57.721 [2024-07-13 21:59:16.956112] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:57.721 [2024-07-13 21:59:16.956141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:57.721 pt1
00:15:57.721 21:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ ))
00:15:57.721 21:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs ))
00:15:57.721 21:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2
00:15:57.721 21:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2
00:15:57.721 21:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:15:57.721 21:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:57.721 21:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt)
00:15:57.721 21:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:57.721 21:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:15:57.980 malloc2
00:15:57.980 21:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:57.980 [2024-07-13 21:59:17.318448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:57.980 [2024-07-13 21:59:17.318494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:57.980 [2024-07-13 21:59:17.318514] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280
00:15:57.980 [2024-07-13 21:59:17.318525] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:57.980 [2024-07-13 21:59:17.320633] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:57.980 [2024-07-13 21:59:17.320664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:57.980 pt2
00:15:57.980 21:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ ))
00:15:57.980 21:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs ))
00:15:57.980 21:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3
00:15:57.980 21:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3
00:15:57.980 21:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:15:57.980 21:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:57.980 21:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt)
00:15:57.980 21:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:57.980 21:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:15:58.239 malloc3
00:15:58.239 21:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:15:58.499 [2024-07-13 21:59:17.694551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:15:58.499 [2024-07-13 21:59:17.694599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:58.499 [2024-07-13 21:59:17.694622] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80
00:15:58.499 [2024-07-13 21:59:17.694634] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:58.499 [2024-07-13 21:59:17.696717] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:58.499 [2024-07-13 21:59:17.696747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:15:58.499 pt3
00:15:58.499 21:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ ))
00:15:58.499 21:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs ))
00:15:58.499 21:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s
00:15:58.499 [2024-07-13 21:59:17.859053] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:58.499 [2024-07-13 21:59:17.860800] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:58.499 [2024-07-13 21:59:17.860867] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:15:58.499 [2024-07-13 21:59:17.861046] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041480
00:15:58.499 [2024-07-13 21:59:17.861065] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:15:58.499 [2024-07-13 21:59:17.861328] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640
00:15:58.499 [2024-07-13 21:59:17.861521] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041480
00:15:58.499 [2024-07-13 21:59:17.861532] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041480
00:15:58.499 [2024-07-13 21:59:17.861681] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:58.499 21:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:15:58.499 21:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:15:58.499 21:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:15:58.499 21:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:15:58.499 21:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:15:58.499 21:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:15:58.499 21:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:15:58.499 21:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:15:58.499 21:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:15:58.499 21:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:15:58.499 21:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:58.499 21:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:58.758 21:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:15:58.758 "name": "raid_bdev1",
00:15:58.758 "uuid": "3d2ad98a-4e8a-497c-af1f-89319c573b3d",
00:15:58.758 "strip_size_kb": 64,
00:15:58.758 "state": "online",
00:15:58.758 "raid_level": "concat",
00:15:58.758 "superblock": true,
00:15:58.758 "num_base_bdevs": 3,
00:15:58.758 "num_base_bdevs_discovered": 3,
00:15:58.758 "num_base_bdevs_operational": 3,
00:15:58.758 "base_bdevs_list": [
00:15:58.758 {
00:15:58.758 "name": "pt1",
00:15:58.758 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:58.758 "is_configured": true,
00:15:58.758 "data_offset": 2048,
00:15:58.758 "data_size": 63488
00:15:58.758 },
00:15:58.758 {
00:15:58.758 "name": "pt2",
00:15:58.758 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:58.758 "is_configured": true,
00:15:58.758 "data_offset": 2048,
00:15:58.758 "data_size": 63488
00:15:58.758 },
00:15:58.758 {
00:15:58.758 "name": "pt3",
00:15:58.758 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:58.758 "is_configured": true,
00:15:58.758 "data_offset": 2048,
00:15:58.758 "data_size": 63488
00:15:58.758 }
00:15:58.758 ]
00:15:58.758 }'
00:15:58.758 21:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:15:58.758 21:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:59.325 21:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1
00:15:59.325 21:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1
00:15:59.325 21:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info
00:15:59.325 21:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info
00:15:59.325 21:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names
00:15:59.325 21:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name
00:15:59.325 21:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:15:59.325 21:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]'
00:15:59.325 [2024-07-13 21:59:18.665400] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:59.325 21:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{
00:15:59.325 "name": "raid_bdev1",
00:15:59.325 "aliases": [
00:15:59.325 "3d2ad98a-4e8a-497c-af1f-89319c573b3d"
00:15:59.325 ],
00:15:59.325 "product_name": "Raid Volume",
00:15:59.325 "block_size": 512,
00:15:59.325 "num_blocks": 190464,
00:15:59.325 "uuid": "3d2ad98a-4e8a-497c-af1f-89319c573b3d",
00:15:59.325 "assigned_rate_limits": {
00:15:59.325 "rw_ios_per_sec": 0,
00:15:59.325 "rw_mbytes_per_sec": 0,
00:15:59.325 "r_mbytes_per_sec": 0,
00:15:59.325 "w_mbytes_per_sec": 0
00:15:59.325 },
00:15:59.325 "claimed": false,
00:15:59.325 "zoned": false,
00:15:59.325 "supported_io_types": {
00:15:59.325 "read": true,
00:15:59.325 "write": true,
00:15:59.325 "unmap": true,
00:15:59.325 "flush": true,
00:15:59.325 "reset": true,
00:15:59.325 "nvme_admin": false,
00:15:59.325 "nvme_io": false,
00:15:59.325 "nvme_io_md": false,
00:15:59.325 "write_zeroes": true,
00:15:59.325 "zcopy": false,
00:15:59.325 "get_zone_info": false,
00:15:59.325 "zone_management": false,
00:15:59.325 "zone_append": false,
00:15:59.325 "compare": false,
00:15:59.325 "compare_and_write": false,
00:15:59.325 "abort": false,
00:15:59.325 "seek_hole": false,
00:15:59.325 "seek_data": false,
00:15:59.325 "copy": false,
00:15:59.325 "nvme_iov_md": false
00:15:59.325 },
00:15:59.325 "memory_domains": [
00:15:59.325 {
00:15:59.325 "dma_device_id": "system",
00:15:59.325 "dma_device_type": 1
00:15:59.325 },
00:15:59.325 {
00:15:59.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:59.325 "dma_device_type": 2
00:15:59.325 },
00:15:59.325 {
00:15:59.325 "dma_device_id": "system",
00:15:59.325 "dma_device_type": 1
00:15:59.325 },
00:15:59.325 {
00:15:59.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:59.325 "dma_device_type": 2
00:15:59.325 },
00:15:59.325 {
00:15:59.325 "dma_device_id": "system",
00:15:59.325 "dma_device_type": 1
00:15:59.325 },
00:15:59.325 {
00:15:59.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:59.325 "dma_device_type": 2
00:15:59.325 }
00:15:59.325 ],
00:15:59.325 "driver_specific": {
00:15:59.325 "raid": {
00:15:59.325 "uuid": "3d2ad98a-4e8a-497c-af1f-89319c573b3d",
00:15:59.325 "strip_size_kb": 64,
00:15:59.325 "state": "online",
00:15:59.325 "raid_level": "concat",
00:15:59.325 "superblock": true,
00:15:59.325 "num_base_bdevs": 3,
00:15:59.325 "num_base_bdevs_discovered": 3,
00:15:59.325 "num_base_bdevs_operational": 3,
00:15:59.325 "base_bdevs_list": [
00:15:59.325 {
00:15:59.325 "name": "pt1",
00:15:59.325 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:59.325 "is_configured": true,
00:15:59.325 "data_offset": 2048,
00:15:59.325 "data_size": 63488
00:15:59.325 },
00:15:59.325 {
00:15:59.325 "name": "pt2",
00:15:59.325 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:59.325 "is_configured": true,
00:15:59.325 "data_offset": 2048,
00:15:59.325 "data_size": 63488
00:15:59.325 },
00:15:59.325 {
00:15:59.325 "name": "pt3",
00:15:59.325 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:59.325 "is_configured": true,
00:15:59.325 "data_offset": 2048,
00:15:59.325 "data_size": 63488
00:15:59.325 }
00:15:59.325 ]
00:15:59.325 }
00:15:59.325 }
00:15:59.325 }'
00:15:59.325 21:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:15:59.584 21:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1
00:15:59.584 pt2
00:15:59.584 pt3'
00:15:59.584 21:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:15:59.584 21:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:15:59.584 21:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1
00:15:59.584 21:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:15:59.584 "name": "pt1",
00:15:59.584 "aliases": [
00:15:59.584 "00000000-0000-0000-0000-000000000001"
00:15:59.584 ],
00:15:59.584 "product_name": "passthru",
00:15:59.584 "block_size": 512,
00:15:59.584 "num_blocks": 65536,
00:15:59.584 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:59.584 "assigned_rate_limits": {
00:15:59.584 "rw_ios_per_sec": 0,
00:15:59.584 "rw_mbytes_per_sec": 0,
00:15:59.584 "r_mbytes_per_sec": 0,
00:15:59.584 "w_mbytes_per_sec": 0
00:15:59.584 },
00:15:59.584 "claimed": true,
00:15:59.584 "claim_type": "exclusive_write",
00:15:59.584 "zoned": false,
00:15:59.584 "supported_io_types": {
00:15:59.584 "read": true,
00:15:59.584 "write": true,
00:15:59.584 "unmap": true,
00:15:59.584 "flush": true,
00:15:59.584 "reset": true,
00:15:59.584 "nvme_admin": false,
00:15:59.584 "nvme_io": false,
00:15:59.584 "nvme_io_md": false,
00:15:59.584 "write_zeroes": true,
00:15:59.584 "zcopy": true,
00:15:59.584 "get_zone_info": false,
00:15:59.584 "zone_management": false,
00:15:59.584 "zone_append": false,
00:15:59.584 "compare": false,
00:15:59.584 "compare_and_write": false,
00:15:59.584 "abort": true,
00:15:59.584 "seek_hole": false,
00:15:59.584 "seek_data": false,
00:15:59.584 "copy": true,
00:15:59.584 "nvme_iov_md": false
00:15:59.584 },
00:15:59.584 "memory_domains": [
00:15:59.584 {
00:15:59.584 "dma_device_id": "system",
00:15:59.584 "dma_device_type": 1
00:15:59.584 },
00:15:59.584 {
00:15:59.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:59.584 "dma_device_type": 2
00:15:59.584 }
00:15:59.584 ],
00:15:59.584 "driver_specific": {
00:15:59.584 "passthru": {
00:15:59.584 "name": "pt1",
00:15:59.584 "base_bdev_name": "malloc1"
00:15:59.584 }
00:15:59.584 }
00:15:59.584 }'
00:15:59.584 21:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:15:59.584 21:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:15:59.843 21:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:15:59.843 21:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:15:59.843 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:15:59.843 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:15:59.843 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:15:59.843 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:15:59.843 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:15:59.843 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:15:59.843 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:15:59.843 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:15:59.843 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:15:59.843 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2
00:15:59.843 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:16:00.102 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:16:00.102 "name": "pt2",
00:16:00.102 "aliases": [
00:16:00.102 "00000000-0000-0000-0000-000000000002"
00:16:00.102 ],
00:16:00.102 "product_name": "passthru",
00:16:00.102 "block_size": 512,
00:16:00.102 "num_blocks": 65536,
00:16:00.102 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:00.102 "assigned_rate_limits": {
00:16:00.102 "rw_ios_per_sec": 0,
00:16:00.102 "rw_mbytes_per_sec": 0,
00:16:00.102 "r_mbytes_per_sec": 0,
00:16:00.102 "w_mbytes_per_sec": 0
00:16:00.102 },
00:16:00.102 "claimed": true,
00:16:00.102 "claim_type": "exclusive_write",
00:16:00.102 "zoned": false,
00:16:00.102 "supported_io_types": {
00:16:00.102 "read": true,
00:16:00.102 "write": true,
00:16:00.102 "unmap": true,
00:16:00.102 "flush": true,
00:16:00.102 "reset": true,
00:16:00.102 "nvme_admin": false,
00:16:00.102 "nvme_io": false,
00:16:00.102 "nvme_io_md": false,
00:16:00.102 "write_zeroes": true,
00:16:00.102 "zcopy": true,
00:16:00.102 "get_zone_info": false,
00:16:00.102 "zone_management": false,
00:16:00.102 "zone_append": false,
00:16:00.102 "compare": false,
00:16:00.102 "compare_and_write": false,
00:16:00.102 "abort": true,
00:16:00.102 "seek_hole": false,
00:16:00.102 "seek_data": false,
00:16:00.102 "copy": true,
00:16:00.102 "nvme_iov_md": false
00:16:00.102 },
00:16:00.102 "memory_domains": [
00:16:00.102 {
00:16:00.102 "dma_device_id": "system",
00:16:00.102 "dma_device_type": 1
00:16:00.102 },
00:16:00.102 {
00:16:00.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:00.102 "dma_device_type": 2
00:16:00.102 }
00:16:00.102 ],
00:16:00.102 "driver_specific": {
00:16:00.102 "passthru": {
00:16:00.102 "name": "pt2",
00:16:00.102 "base_bdev_name": "malloc2"
00:16:00.102 }
00:16:00.102 }
00:16:00.102 }'
00:16:00.102 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:16:00.102 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:16:00.102 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:16:00.102 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:16:00.360 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:16:00.360 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:16:00.360 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:16:00.360 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:16:00.360 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:16:00.360 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:16:00.360 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:16:00.360 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:16:00.360 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:16:00.360 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3
00:16:00.360 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:16:00.619 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:16:00.619 "name": "pt3",
00:16:00.619 "aliases": [
00:16:00.619 "00000000-0000-0000-0000-000000000003"
00:16:00.619 ],
00:16:00.619 "product_name": "passthru",
00:16:00.619 "block_size": 512,
00:16:00.619 "num_blocks": 65536,
00:16:00.619 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:00.619 "assigned_rate_limits": {
00:16:00.619 "rw_ios_per_sec": 0,
00:16:00.619 "rw_mbytes_per_sec": 0,
00:16:00.619 "r_mbytes_per_sec": 0,
00:16:00.619 "w_mbytes_per_sec": 0
00:16:00.619 },
00:16:00.619 "claimed": true,
00:16:00.619 "claim_type": "exclusive_write",
00:16:00.619 "zoned": false,
00:16:00.619 "supported_io_types": {
00:16:00.619 "read": true,
00:16:00.619 "write": true,
00:16:00.619 "unmap": true,
00:16:00.619 "flush": true,
00:16:00.619 "reset": true,
00:16:00.619 "nvme_admin": false,
00:16:00.619 "nvme_io": false,
00:16:00.619 "nvme_io_md": false,
00:16:00.619 "write_zeroes": true,
00:16:00.619 "zcopy": true,
00:16:00.619 "get_zone_info": false,
00:16:00.619 "zone_management": false,
00:16:00.619 "zone_append": false,
00:16:00.619 "compare": false,
00:16:00.619 "compare_and_write": false,
00:16:00.619 "abort": true,
00:16:00.619 "seek_hole": false,
00:16:00.619 "seek_data": false,
00:16:00.619 "copy": true,
00:16:00.619 "nvme_iov_md": false
00:16:00.619 },
00:16:00.619 "memory_domains": [
00:16:00.619 {
00:16:00.619 "dma_device_id": "system",
00:16:00.619 "dma_device_type": 1
00:16:00.619 },
00:16:00.619 {
00:16:00.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:00.619 "dma_device_type": 2
00:16:00.619 }
00:16:00.619 ],
00:16:00.619 "driver_specific": {
00:16:00.619 "passthru": {
00:16:00.619 "name": "pt3",
00:16:00.619 "base_bdev_name": "malloc3"
00:16:00.619 }
00:16:00.619 }
00:16:00.619 }'
00:16:00.619 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:16:00.619 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:16:00.619 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:16:00.619 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:16:00.619 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:16:00.619 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:16:00.619 21:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:16:00.878 21:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:16:00.878 21:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:16:00.878 21:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:16:00.878 21:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:16:00.878 21:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:16:00.878 21:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:16:00.878 21:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid'
00:16:01.137 [2024-07-13 21:59:20.301735] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:01.137 21:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=3d2ad98a-4e8a-497c-af1f-89319c573b3d
00:16:01.137 21:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 3d2ad98a-4e8a-497c-af1f-89319c573b3d ']'
00:16:01.137 21:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:16:01.138 [2024-07-13 21:59:20.477897] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:01.138 [2024-07-13 21:59:20.477935] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:01.138 [2024-07-13 21:59:20.478017] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:01.138 [2024-07-13 21:59:20.478078] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:01.138 [2024-07-13 21:59:20.478089] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041480 name raid_bdev1, state offline
00:16:01.138 21:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:01.138 21:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]'
00:16:01.397 21:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev=
00:16:01.397 21:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']'
00:16:01.397 21:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}"
00:16:01.397 21:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:16:01.655 21:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}"
00:16:01.655 21:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:16:01.655 21:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}"
00:16:01.655 21:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:16:01.914 21:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:16:01.914 21:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:16:02.172 21:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']'
00:16:02.172 21:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:16:02.172 21:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0
00:16:02.172 21:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:16:02.172 21:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:16:02.172 21:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:16:02.172 21:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:16:02.172 21:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:16:02.172 21:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:16:02.172 21:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:16:02.173 21:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:16:02.173 21:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]]
00:16:02.173 21:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:16:02.173 [2024-07-13 21:59:21.480514] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:16:02.173 [2024-07-13 21:59:21.482237] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:16:02.173 [2024-07-13 21:59:21.482294] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:16:02.173 [2024-07-13 21:59:21.482341] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:16:02.173 [2024-07-13 21:59:21.482407] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:16:02.173 [2024-07-13 21:59:21.482428] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:16:02.173 [2024-07-13 21:59:21.482447] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:02.173 [2024-07-13 21:59:21.482457] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state configuring
00:16:02.173 request:
00:16:02.173 {
00:16:02.173 "name": "raid_bdev1",
00:16:02.173 "raid_level": "concat",
00:16:02.173 "base_bdevs": [
00:16:02.173 "malloc1",
00:16:02.173 "malloc2",
00:16:02.173 "malloc3"
00:16:02.173 ],
00:16:02.173 "strip_size_kb": 64,
00:16:02.173 "superblock": false,
00:16:02.173 "method": "bdev_raid_create",
00:16:02.173 "req_id": 1
00:16:02.173 }
00:16:02.173 Got JSON-RPC error response
00:16:02.173 response:
00:16:02.173 {
00:16:02.173 "code": -17,
00:16:02.173 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:16:02.173 }
00:16:02.173 21:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1
00:16:02.173 21:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:16:02.173 21:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:16:02.173 21:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:16:02.173 21:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:02.173 21:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]'
00:16:02.432 21:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev=
00:16:02.432 21:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']'
00:16:02.432 21:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:02.691 [2024-07-13 21:59:21.825369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:02.691 [2024-07-13 21:59:21.825433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:02.691 [2024-07-13 21:59:21.825457] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042080
00:16:02.691 [2024-07-13 21:59:21.825468] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:02.691 [2024-07-13 21:59:21.827684] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:02.691 [2024-07-13 21:59:21.827715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:02.691 [2024-07-13 21:59:21.827808] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:16:02.691 [2024-07-13 21:59:21.827869] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:02.691 pt1
00:16:02.691 21:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:16:02.691 21:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:16:02.691 21:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:16:02.691 21:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:16:02.691 21:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:16:02.691 21:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:16:02.691 21:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:16:02.691 21:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:16:02.691 21:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:16:02.691 21:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:16:02.691 21:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:02.691 21:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:02.691 21:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:16:02.691 "name": "raid_bdev1",
00:16:02.691 "uuid": "3d2ad98a-4e8a-497c-af1f-89319c573b3d",
00:16:02.691 "strip_size_kb": 64,
00:16:02.691 "state": "configuring",
00:16:02.691 "raid_level": "concat",
00:16:02.691 "superblock": true,
00:16:02.691 "num_base_bdevs": 3,
00:16:02.691 "num_base_bdevs_discovered": 1,
00:16:02.691 "num_base_bdevs_operational": 3,
00:16:02.691 "base_bdevs_list": [
00:16:02.691 {
00:16:02.691 "name": "pt1",
00:16:02.691 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:02.691 "is_configured": true,
00:16:02.691 "data_offset": 2048,
00:16:02.691 "data_size": 63488
00:16:02.691 },
00:16:02.691 {
00:16:02.691 "name": null,
00:16:02.691 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:02.691 "is_configured": false,
00:16:02.691 "data_offset": 2048,
00:16:02.691 "data_size": 63488
00:16:02.691 },
00:16:02.691 {
00:16:02.691 "name": null,
00:16:02.691 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:02.691 "is_configured": false,
00:16:02.691 "data_offset": 2048,
00:16:02.691 "data_size": 63488
00:16:02.691 }
00:16:02.691 ]
00:16:02.691 }'
00:16:02.691 21:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:16:02.691 21:59:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.259 21:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']'
00:16:03.259 21:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:03.518 [2024-07-13 21:59:22.659547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:03.518 [2024-07-13 21:59:22.659630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:03.518 [2024-07-13 21:59:22.659656] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980
00:16:03.518 [2024-07-13 21:59:22.659667] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:03.518 [2024-07-13 21:59:22.660183] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:03.518 [2024-07-13 21:59:22.660206] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:03.518 [2024-07-13 21:59:22.660293] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:16:03.518 [2024-07-13 21:59:22.660316] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:03.518 pt2
00:16:03.518 21:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:16:03.519 [2024-07-13 21:59:22.828055] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:16:03.519 21:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:16:03.519 21:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:16:03.519 21:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:16:03.519 21:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:16:03.519 21:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:16:03.519 21:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:16:03.519 21:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:16:03.519 21:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- #
local num_base_bdevs 00:16:03.519 21:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:03.519 21:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:03.519 21:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.519 21:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.778 21:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:03.778 "name": "raid_bdev1", 00:16:03.778 "uuid": "3d2ad98a-4e8a-497c-af1f-89319c573b3d", 00:16:03.778 "strip_size_kb": 64, 00:16:03.778 "state": "configuring", 00:16:03.778 "raid_level": "concat", 00:16:03.778 "superblock": true, 00:16:03.778 "num_base_bdevs": 3, 00:16:03.778 "num_base_bdevs_discovered": 1, 00:16:03.778 "num_base_bdevs_operational": 3, 00:16:03.778 "base_bdevs_list": [ 00:16:03.778 { 00:16:03.778 "name": "pt1", 00:16:03.778 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:03.778 "is_configured": true, 00:16:03.778 "data_offset": 2048, 00:16:03.778 "data_size": 63488 00:16:03.778 }, 00:16:03.778 { 00:16:03.778 "name": null, 00:16:03.778 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:03.778 "is_configured": false, 00:16:03.778 "data_offset": 2048, 00:16:03.778 "data_size": 63488 00:16:03.778 }, 00:16:03.778 { 00:16:03.778 "name": null, 00:16:03.778 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:03.778 "is_configured": false, 00:16:03.778 "data_offset": 2048, 00:16:03.778 "data_size": 63488 00:16:03.778 } 00:16:03.778 ] 00:16:03.778 }' 00:16:03.778 21:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:03.778 21:59:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.350 21:59:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:16:04.350 21:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:04.350 21:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:04.350 [2024-07-13 21:59:23.642142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:04.350 [2024-07-13 21:59:23.642209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.350 [2024-07-13 21:59:23.642231] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:16:04.350 [2024-07-13 21:59:23.642245] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.350 [2024-07-13 21:59:23.642728] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.350 [2024-07-13 21:59:23.642751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:04.350 [2024-07-13 21:59:23.642834] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:04.350 [2024-07-13 21:59:23.642858] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:04.350 pt2 00:16:04.350 21:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:04.350 21:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:04.350 21:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:04.610 [2024-07-13 21:59:23.810609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:04.610 [2024-07-13 21:59:23.810669] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.610 [2024-07-13 21:59:23.810690] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042f80 00:16:04.610 [2024-07-13 21:59:23.810703] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.610 [2024-07-13 21:59:23.811211] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.610 [2024-07-13 21:59:23.811232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:04.610 [2024-07-13 21:59:23.811308] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:04.610 [2024-07-13 21:59:23.811336] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:04.610 [2024-07-13 21:59:23.811483] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042680 00:16:04.610 [2024-07-13 21:59:23.811496] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:04.610 [2024-07-13 21:59:23.811730] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:16:04.610 [2024-07-13 21:59:23.811925] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042680 00:16:04.610 [2024-07-13 21:59:23.811936] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042680 00:16:04.610 [2024-07-13 21:59:23.812099] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.610 pt3 00:16:04.610 21:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:04.610 21:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:04.610 21:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:04.610 21:59:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:04.610 21:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:04.610 21:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:04.610 21:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:04.610 21:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:04.610 21:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:04.610 21:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:04.610 21:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:04.610 21:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:04.610 21:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.610 21:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.869 21:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:04.869 "name": "raid_bdev1", 00:16:04.869 "uuid": "3d2ad98a-4e8a-497c-af1f-89319c573b3d", 00:16:04.869 "strip_size_kb": 64, 00:16:04.869 "state": "online", 00:16:04.869 "raid_level": "concat", 00:16:04.869 "superblock": true, 00:16:04.869 "num_base_bdevs": 3, 00:16:04.869 "num_base_bdevs_discovered": 3, 00:16:04.869 "num_base_bdevs_operational": 3, 00:16:04.869 "base_bdevs_list": [ 00:16:04.869 { 00:16:04.869 "name": "pt1", 00:16:04.870 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:04.870 "is_configured": true, 00:16:04.870 "data_offset": 2048, 00:16:04.870 "data_size": 63488 00:16:04.870 }, 00:16:04.870 { 00:16:04.870 "name": "pt2", 
00:16:04.870 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:04.870 "is_configured": true, 00:16:04.870 "data_offset": 2048, 00:16:04.870 "data_size": 63488 00:16:04.870 }, 00:16:04.870 { 00:16:04.870 "name": "pt3", 00:16:04.870 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:04.870 "is_configured": true, 00:16:04.870 "data_offset": 2048, 00:16:04.870 "data_size": 63488 00:16:04.870 } 00:16:04.870 ] 00:16:04.870 }' 00:16:04.870 21:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:04.870 21:59:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.437 21:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:16:05.437 21:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:05.437 21:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:05.437 21:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:05.437 21:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:05.437 21:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:05.437 21:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:05.437 21:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:05.437 [2024-07-13 21:59:24.673106] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:05.437 21:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:05.437 "name": "raid_bdev1", 00:16:05.437 "aliases": [ 00:16:05.437 "3d2ad98a-4e8a-497c-af1f-89319c573b3d" 00:16:05.437 ], 00:16:05.437 "product_name": "Raid Volume", 00:16:05.437 "block_size": 512, 
00:16:05.437 "num_blocks": 190464, 00:16:05.437 "uuid": "3d2ad98a-4e8a-497c-af1f-89319c573b3d", 00:16:05.437 "assigned_rate_limits": { 00:16:05.437 "rw_ios_per_sec": 0, 00:16:05.437 "rw_mbytes_per_sec": 0, 00:16:05.437 "r_mbytes_per_sec": 0, 00:16:05.437 "w_mbytes_per_sec": 0 00:16:05.437 }, 00:16:05.437 "claimed": false, 00:16:05.437 "zoned": false, 00:16:05.437 "supported_io_types": { 00:16:05.437 "read": true, 00:16:05.437 "write": true, 00:16:05.437 "unmap": true, 00:16:05.437 "flush": true, 00:16:05.437 "reset": true, 00:16:05.437 "nvme_admin": false, 00:16:05.437 "nvme_io": false, 00:16:05.437 "nvme_io_md": false, 00:16:05.437 "write_zeroes": true, 00:16:05.437 "zcopy": false, 00:16:05.437 "get_zone_info": false, 00:16:05.437 "zone_management": false, 00:16:05.437 "zone_append": false, 00:16:05.437 "compare": false, 00:16:05.437 "compare_and_write": false, 00:16:05.437 "abort": false, 00:16:05.437 "seek_hole": false, 00:16:05.437 "seek_data": false, 00:16:05.437 "copy": false, 00:16:05.437 "nvme_iov_md": false 00:16:05.437 }, 00:16:05.437 "memory_domains": [ 00:16:05.437 { 00:16:05.437 "dma_device_id": "system", 00:16:05.437 "dma_device_type": 1 00:16:05.437 }, 00:16:05.437 { 00:16:05.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.437 "dma_device_type": 2 00:16:05.437 }, 00:16:05.437 { 00:16:05.437 "dma_device_id": "system", 00:16:05.437 "dma_device_type": 1 00:16:05.437 }, 00:16:05.437 { 00:16:05.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.437 "dma_device_type": 2 00:16:05.437 }, 00:16:05.437 { 00:16:05.437 "dma_device_id": "system", 00:16:05.437 "dma_device_type": 1 00:16:05.437 }, 00:16:05.437 { 00:16:05.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.437 "dma_device_type": 2 00:16:05.437 } 00:16:05.437 ], 00:16:05.437 "driver_specific": { 00:16:05.437 "raid": { 00:16:05.437 "uuid": "3d2ad98a-4e8a-497c-af1f-89319c573b3d", 00:16:05.437 "strip_size_kb": 64, 00:16:05.437 "state": "online", 00:16:05.437 "raid_level": "concat", 
00:16:05.437 "superblock": true, 00:16:05.437 "num_base_bdevs": 3, 00:16:05.437 "num_base_bdevs_discovered": 3, 00:16:05.437 "num_base_bdevs_operational": 3, 00:16:05.437 "base_bdevs_list": [ 00:16:05.437 { 00:16:05.437 "name": "pt1", 00:16:05.437 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:05.437 "is_configured": true, 00:16:05.437 "data_offset": 2048, 00:16:05.437 "data_size": 63488 00:16:05.437 }, 00:16:05.437 { 00:16:05.437 "name": "pt2", 00:16:05.437 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:05.437 "is_configured": true, 00:16:05.437 "data_offset": 2048, 00:16:05.437 "data_size": 63488 00:16:05.437 }, 00:16:05.437 { 00:16:05.437 "name": "pt3", 00:16:05.437 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:05.437 "is_configured": true, 00:16:05.437 "data_offset": 2048, 00:16:05.437 "data_size": 63488 00:16:05.437 } 00:16:05.437 ] 00:16:05.437 } 00:16:05.437 } 00:16:05.437 }' 00:16:05.437 21:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:05.437 21:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:05.437 pt2 00:16:05.437 pt3' 00:16:05.437 21:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:05.437 21:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:05.437 21:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:05.696 21:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:05.696 "name": "pt1", 00:16:05.696 "aliases": [ 00:16:05.696 "00000000-0000-0000-0000-000000000001" 00:16:05.696 ], 00:16:05.696 "product_name": "passthru", 00:16:05.696 "block_size": 512, 00:16:05.696 "num_blocks": 65536, 00:16:05.696 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:16:05.696 "assigned_rate_limits": { 00:16:05.696 "rw_ios_per_sec": 0, 00:16:05.696 "rw_mbytes_per_sec": 0, 00:16:05.696 "r_mbytes_per_sec": 0, 00:16:05.696 "w_mbytes_per_sec": 0 00:16:05.696 }, 00:16:05.696 "claimed": true, 00:16:05.696 "claim_type": "exclusive_write", 00:16:05.696 "zoned": false, 00:16:05.696 "supported_io_types": { 00:16:05.696 "read": true, 00:16:05.696 "write": true, 00:16:05.696 "unmap": true, 00:16:05.696 "flush": true, 00:16:05.696 "reset": true, 00:16:05.696 "nvme_admin": false, 00:16:05.696 "nvme_io": false, 00:16:05.696 "nvme_io_md": false, 00:16:05.696 "write_zeroes": true, 00:16:05.696 "zcopy": true, 00:16:05.696 "get_zone_info": false, 00:16:05.696 "zone_management": false, 00:16:05.696 "zone_append": false, 00:16:05.696 "compare": false, 00:16:05.696 "compare_and_write": false, 00:16:05.696 "abort": true, 00:16:05.696 "seek_hole": false, 00:16:05.696 "seek_data": false, 00:16:05.696 "copy": true, 00:16:05.696 "nvme_iov_md": false 00:16:05.696 }, 00:16:05.696 "memory_domains": [ 00:16:05.696 { 00:16:05.696 "dma_device_id": "system", 00:16:05.696 "dma_device_type": 1 00:16:05.696 }, 00:16:05.696 { 00:16:05.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.696 "dma_device_type": 2 00:16:05.696 } 00:16:05.696 ], 00:16:05.696 "driver_specific": { 00:16:05.696 "passthru": { 00:16:05.696 "name": "pt1", 00:16:05.696 "base_bdev_name": "malloc1" 00:16:05.696 } 00:16:05.696 } 00:16:05.696 }' 00:16:05.696 21:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:05.696 21:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:05.696 21:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:05.696 21:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:05.696 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:05.696 21:59:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:05.696 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:05.955 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:05.955 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:05.955 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:05.955 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:05.955 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:05.955 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:05.955 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:05.955 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:06.213 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:06.213 "name": "pt2", 00:16:06.213 "aliases": [ 00:16:06.213 "00000000-0000-0000-0000-000000000002" 00:16:06.213 ], 00:16:06.213 "product_name": "passthru", 00:16:06.213 "block_size": 512, 00:16:06.213 "num_blocks": 65536, 00:16:06.213 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:06.213 "assigned_rate_limits": { 00:16:06.213 "rw_ios_per_sec": 0, 00:16:06.213 "rw_mbytes_per_sec": 0, 00:16:06.213 "r_mbytes_per_sec": 0, 00:16:06.213 "w_mbytes_per_sec": 0 00:16:06.213 }, 00:16:06.213 "claimed": true, 00:16:06.213 "claim_type": "exclusive_write", 00:16:06.213 "zoned": false, 00:16:06.213 "supported_io_types": { 00:16:06.213 "read": true, 00:16:06.213 "write": true, 00:16:06.213 "unmap": true, 00:16:06.213 "flush": true, 00:16:06.213 "reset": true, 00:16:06.213 "nvme_admin": false, 00:16:06.213 
"nvme_io": false, 00:16:06.213 "nvme_io_md": false, 00:16:06.213 "write_zeroes": true, 00:16:06.213 "zcopy": true, 00:16:06.213 "get_zone_info": false, 00:16:06.213 "zone_management": false, 00:16:06.213 "zone_append": false, 00:16:06.213 "compare": false, 00:16:06.213 "compare_and_write": false, 00:16:06.213 "abort": true, 00:16:06.213 "seek_hole": false, 00:16:06.213 "seek_data": false, 00:16:06.214 "copy": true, 00:16:06.214 "nvme_iov_md": false 00:16:06.214 }, 00:16:06.214 "memory_domains": [ 00:16:06.214 { 00:16:06.214 "dma_device_id": "system", 00:16:06.214 "dma_device_type": 1 00:16:06.214 }, 00:16:06.214 { 00:16:06.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.214 "dma_device_type": 2 00:16:06.214 } 00:16:06.214 ], 00:16:06.214 "driver_specific": { 00:16:06.214 "passthru": { 00:16:06.214 "name": "pt2", 00:16:06.214 "base_bdev_name": "malloc2" 00:16:06.214 } 00:16:06.214 } 00:16:06.214 }' 00:16:06.214 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:06.214 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:06.214 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:06.214 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:06.214 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:06.214 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:06.214 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:06.214 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:06.473 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:06.473 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:06.473 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:16:06.473 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:06.473 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:06.473 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:06.473 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:06.473 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:06.473 "name": "pt3", 00:16:06.473 "aliases": [ 00:16:06.473 "00000000-0000-0000-0000-000000000003" 00:16:06.473 ], 00:16:06.473 "product_name": "passthru", 00:16:06.473 "block_size": 512, 00:16:06.473 "num_blocks": 65536, 00:16:06.473 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:06.473 "assigned_rate_limits": { 00:16:06.473 "rw_ios_per_sec": 0, 00:16:06.473 "rw_mbytes_per_sec": 0, 00:16:06.473 "r_mbytes_per_sec": 0, 00:16:06.473 "w_mbytes_per_sec": 0 00:16:06.473 }, 00:16:06.473 "claimed": true, 00:16:06.473 "claim_type": "exclusive_write", 00:16:06.473 "zoned": false, 00:16:06.473 "supported_io_types": { 00:16:06.473 "read": true, 00:16:06.473 "write": true, 00:16:06.473 "unmap": true, 00:16:06.473 "flush": true, 00:16:06.473 "reset": true, 00:16:06.473 "nvme_admin": false, 00:16:06.473 "nvme_io": false, 00:16:06.473 "nvme_io_md": false, 00:16:06.473 "write_zeroes": true, 00:16:06.473 "zcopy": true, 00:16:06.473 "get_zone_info": false, 00:16:06.473 "zone_management": false, 00:16:06.473 "zone_append": false, 00:16:06.473 "compare": false, 00:16:06.473 "compare_and_write": false, 00:16:06.473 "abort": true, 00:16:06.473 "seek_hole": false, 00:16:06.473 "seek_data": false, 00:16:06.473 "copy": true, 00:16:06.473 "nvme_iov_md": false 00:16:06.473 }, 00:16:06.473 "memory_domains": [ 00:16:06.473 { 00:16:06.473 "dma_device_id": "system", 00:16:06.473 
"dma_device_type": 1 00:16:06.473 }, 00:16:06.473 { 00:16:06.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.473 "dma_device_type": 2 00:16:06.473 } 00:16:06.473 ], 00:16:06.473 "driver_specific": { 00:16:06.473 "passthru": { 00:16:06.473 "name": "pt3", 00:16:06.473 "base_bdev_name": "malloc3" 00:16:06.473 } 00:16:06.473 } 00:16:06.473 }' 00:16:06.473 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:06.732 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:06.732 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:06.732 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:06.732 21:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:06.732 21:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:06.732 21:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:06.732 21:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:06.732 21:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:06.732 21:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:06.991 21:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:06.991 21:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:06.991 21:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:16:06.991 21:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:06.991 [2024-07-13 21:59:26.325468] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:06.991 21:59:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 3d2ad98a-4e8a-497c-af1f-89319c573b3d '!=' 3d2ad98a-4e8a-497c-af1f-89319c573b3d ']' 00:16:06.991 21:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:16:06.991 21:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:06.991 21:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:06.991 21:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1391517 00:16:06.991 21:59:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 1391517 ']' 00:16:06.991 21:59:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 1391517 00:16:06.991 21:59:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:16:06.991 21:59:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:06.991 21:59:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1391517 00:16:07.251 21:59:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:07.251 21:59:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:07.251 21:59:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1391517' 00:16:07.251 killing process with pid 1391517 00:16:07.251 21:59:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 1391517 00:16:07.251 [2024-07-13 21:59:26.398562] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:07.251 [2024-07-13 21:59:26.398661] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:07.251 [2024-07-13 21:59:26.398723] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:07.251 
21:59:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 1391517 00:16:07.251 [2024-07-13 21:59:26.398741] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state offline 00:16:07.251 [2024-07-13 21:59:26.621382] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:08.631 21:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:16:08.631 00:16:08.631 real 0m12.119s 00:16:08.631 user 0m20.427s 00:16:08.631 sys 0m2.208s 00:16:08.631 21:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:08.632 21:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.632 ************************************ 00:16:08.632 END TEST raid_superblock_test 00:16:08.632 ************************************ 00:16:08.632 21:59:27 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:08.632 21:59:27 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:16:08.632 21:59:27 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:08.632 21:59:27 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:08.632 21:59:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:08.632 ************************************ 00:16:08.632 START TEST raid_read_error_test 00:16:08.632 ************************************ 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 read 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 
-- # (( i = 1 )) 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:16:08.632 
21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.M1M53Kyo54 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1393923 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1393923 /var/tmp/spdk-raid.sock 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 1393923 ']' 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:08.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:08.632 21:59:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.892 [2024-07-13 21:59:28.032211] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:16:08.892 [2024-07-13 21:59:28.032307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1393923 ] 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3d:02.3 cannot be used 
00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:08.892 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:08.892 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.892 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:08.892 [2024-07-13 21:59:28.193279] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.151 [2024-07-13 21:59:28.397370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.411 [2024-07-13 21:59:28.636449] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:09.411 [2024-07-13 21:59:28.636477] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:09.670 21:59:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:09.670 21:59:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:16:09.670 21:59:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:09.670 21:59:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:09.670 BaseBdev1_malloc 00:16:09.670 21:59:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:09.929 true 00:16:09.929 21:59:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:10.189 [2024-07-13 21:59:29.336915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:10.189 [2024-07-13 21:59:29.336966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.189 [2024-07-13 21:59:29.336991] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:16:10.189 [2024-07-13 21:59:29.337008] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.189 [2024-07-13 21:59:29.339115] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.189 [2024-07-13 21:59:29.339146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:10.189 BaseBdev1 00:16:10.189 21:59:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:10.189 21:59:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:10.189 BaseBdev2_malloc 00:16:10.189 21:59:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:10.448 true 00:16:10.448 21:59:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:10.708 [2024-07-13 21:59:29.869584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:16:10.708 [2024-07-13 21:59:29.869635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.708 [2024-07-13 21:59:29.869672] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:16:10.708 [2024-07-13 21:59:29.869689] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.708 [2024-07-13 21:59:29.871769] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.708 [2024-07-13 21:59:29.871799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:10.708 BaseBdev2 00:16:10.708 21:59:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:10.708 21:59:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:10.708 BaseBdev3_malloc 00:16:10.708 21:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:10.967 true 00:16:10.967 21:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:11.290 [2024-07-13 21:59:30.440757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:11.290 [2024-07-13 21:59:30.440808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.290 [2024-07-13 21:59:30.440830] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:16:11.290 [2024-07-13 21:59:30.440844] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.290 [2024-07-13 
21:59:30.442949] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.290 [2024-07-13 21:59:30.442979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:11.290 BaseBdev3 00:16:11.290 21:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:16:11.290 [2024-07-13 21:59:30.613240] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:11.290 [2024-07-13 21:59:30.615145] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:11.290 [2024-07-13 21:59:30.615213] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:11.290 [2024-07-13 21:59:30.615430] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041d80 00:16:11.290 [2024-07-13 21:59:30.615444] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:11.290 [2024-07-13 21:59:30.615710] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:16:11.290 [2024-07-13 21:59:30.615914] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041d80 00:16:11.290 [2024-07-13 21:59:30.615932] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041d80 00:16:11.290 [2024-07-13 21:59:30.616095] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.290 21:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:11.290 21:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:11.290 21:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:16:11.290 21:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:11.290 21:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:11.290 21:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:11.290 21:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:11.290 21:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:11.290 21:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:11.290 21:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:11.290 21:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.290 21:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.565 21:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:11.565 "name": "raid_bdev1", 00:16:11.565 "uuid": "e9f0c23e-0a85-4408-bf17-9cf0778e9845", 00:16:11.565 "strip_size_kb": 64, 00:16:11.565 "state": "online", 00:16:11.565 "raid_level": "concat", 00:16:11.565 "superblock": true, 00:16:11.565 "num_base_bdevs": 3, 00:16:11.565 "num_base_bdevs_discovered": 3, 00:16:11.565 "num_base_bdevs_operational": 3, 00:16:11.565 "base_bdevs_list": [ 00:16:11.565 { 00:16:11.565 "name": "BaseBdev1", 00:16:11.565 "uuid": "64e1e257-034c-56fc-afad-1db399fb0247", 00:16:11.565 "is_configured": true, 00:16:11.565 "data_offset": 2048, 00:16:11.565 "data_size": 63488 00:16:11.565 }, 00:16:11.565 { 00:16:11.565 "name": "BaseBdev2", 00:16:11.565 "uuid": "936833bd-6116-5b46-b28a-c044c6def380", 00:16:11.565 "is_configured": true, 00:16:11.565 "data_offset": 2048, 
00:16:11.565 "data_size": 63488 00:16:11.565 }, 00:16:11.565 { 00:16:11.565 "name": "BaseBdev3", 00:16:11.565 "uuid": "d2dd11b2-568e-5489-a242-852867809c04", 00:16:11.565 "is_configured": true, 00:16:11.565 "data_offset": 2048, 00:16:11.565 "data_size": 63488 00:16:11.565 } 00:16:11.565 ] 00:16:11.565 }' 00:16:11.565 21:59:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:11.565 21:59:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.134 21:59:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:12.134 21:59:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:12.134 [2024-07-13 21:59:31.348570] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:16:13.072 21:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:13.072 21:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:13.072 21:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:16:13.072 21:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:16:13.072 21:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:13.072 21:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:13.072 21:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:13.072 21:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:13.072 21:59:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:13.072 21:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:13.072 21:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:13.072 21:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:13.072 21:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:13.072 21:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:13.072 21:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.072 21:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.332 21:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:13.332 "name": "raid_bdev1", 00:16:13.332 "uuid": "e9f0c23e-0a85-4408-bf17-9cf0778e9845", 00:16:13.332 "strip_size_kb": 64, 00:16:13.332 "state": "online", 00:16:13.332 "raid_level": "concat", 00:16:13.332 "superblock": true, 00:16:13.332 "num_base_bdevs": 3, 00:16:13.332 "num_base_bdevs_discovered": 3, 00:16:13.332 "num_base_bdevs_operational": 3, 00:16:13.332 "base_bdevs_list": [ 00:16:13.332 { 00:16:13.332 "name": "BaseBdev1", 00:16:13.332 "uuid": "64e1e257-034c-56fc-afad-1db399fb0247", 00:16:13.332 "is_configured": true, 00:16:13.332 "data_offset": 2048, 00:16:13.332 "data_size": 63488 00:16:13.332 }, 00:16:13.332 { 00:16:13.332 "name": "BaseBdev2", 00:16:13.332 "uuid": "936833bd-6116-5b46-b28a-c044c6def380", 00:16:13.332 "is_configured": true, 00:16:13.332 "data_offset": 2048, 00:16:13.332 "data_size": 63488 00:16:13.332 }, 00:16:13.332 { 00:16:13.332 "name": "BaseBdev3", 00:16:13.332 "uuid": "d2dd11b2-568e-5489-a242-852867809c04", 
00:16:13.332 "is_configured": true, 00:16:13.332 "data_offset": 2048, 00:16:13.332 "data_size": 63488 00:16:13.332 } 00:16:13.332 ] 00:16:13.332 }' 00:16:13.332 21:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:13.332 21:59:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.901 21:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:13.901 [2024-07-13 21:59:33.256959] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:13.901 [2024-07-13 21:59:33.257000] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:13.901 [2024-07-13 21:59:33.259298] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:13.901 [2024-07-13 21:59:33.259338] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.901 [2024-07-13 21:59:33.259372] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:13.901 [2024-07-13 21:59:33.259383] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041d80 name raid_bdev1, state offline 00:16:13.901 0 00:16:13.901 21:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1393923 00:16:13.901 21:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 1393923 ']' 00:16:13.901 21:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 1393923 00:16:13.901 21:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:16:13.901 21:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:13.901 21:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1393923 
00:16:14.160 21:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:14.160 21:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:14.160 21:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1393923' 00:16:14.160 killing process with pid 1393923 00:16:14.160 21:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 1393923 00:16:14.160 [2024-07-13 21:59:33.333418] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:14.160 21:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 1393923 00:16:14.160 [2024-07-13 21:59:33.497187] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:15.538 21:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.M1M53Kyo54 00:16:15.538 21:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:15.538 21:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:15.538 21:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.53 00:16:15.538 21:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:16:15.538 21:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:15.538 21:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:15.538 21:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.53 != \0\.\0\0 ]] 00:16:15.538 00:16:15.538 real 0m6.817s 00:16:15.538 user 0m9.535s 00:16:15.538 sys 0m1.145s 00:16:15.538 21:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:15.538 21:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.538 ************************************ 00:16:15.538 
END TEST raid_read_error_test 00:16:15.538 ************************************ 00:16:15.538 21:59:34 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:15.538 21:59:34 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:16:15.538 21:59:34 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:15.538 21:59:34 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:15.538 21:59:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:15.538 ************************************ 00:16:15.538 START TEST raid_write_error_test 00:16:15.538 ************************************ 00:16:15.538 21:59:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 write 00:16:15.538 21:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:16:15.538 21:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:16:15.538 21:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:16:15.538 21:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:15.538 21:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:15.538 21:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:16:15.538 21:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:15.538 21:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:15.538 21:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:16:15.538 21:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:15.538 21:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:15.538 21:59:34 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:16:15.538 21:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:15.538 21:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:15.538 21:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:15.538 21:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:15.538 21:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:15.538 21:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:15.538 21:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:15.538 21:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:15.538 21:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:15.538 21:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:16:15.538 21:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:16:15.538 21:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:16:15.538 21:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:15.539 21:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.GYEgXVFyNm 00:16:15.539 21:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1395099 00:16:15.539 21:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1395099 /var/tmp/spdk-raid.sock 00:16:15.539 21:59:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 
128k -q 1 -z -f -L bdev_raid 00:16:15.539 21:59:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 1395099 ']' 00:16:15.539 21:59:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:15.539 21:59:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:15.539 21:59:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:15.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:15.539 21:59:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:15.539 21:59:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.798 [2024-07-13 21:59:34.946093] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:16:15.798 [2024-07-13 21:59:34.946213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1395099 ] 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3d:02.3 cannot be used 
00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:15.798 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:15.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.798 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:15.798 [2024-07-13 21:59:35.108780] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.058 [2024-07-13 21:59:35.312859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.317 [2024-07-13 21:59:35.546985] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.317 [2024-07-13 21:59:35.547024] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.317 21:59:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:16.318 21:59:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:16:16.318 21:59:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:16.576 21:59:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:16.576 BaseBdev1_malloc 00:16:16.576 21:59:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:16.835 true 00:16:16.835 21:59:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:16.835 [2024-07-13 21:59:36.224170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:16.835 [2024-07-13 21:59:36.224221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.835 [2024-07-13 21:59:36.224258] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:16:16.835 [2024-07-13 21:59:36.224274] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.094 [2024-07-13 21:59:36.226410] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.094 [2024-07-13 21:59:36.226441] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:17.094 BaseBdev1 00:16:17.094 21:59:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:17.094 21:59:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:17.094 BaseBdev2_malloc 00:16:17.094 21:59:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:17.353 true 00:16:17.353 21:59:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:17.612 [2024-07-13 21:59:36.772711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:16:17.612 [2024-07-13 21:59:36.772763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.612 [2024-07-13 21:59:36.772783] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:16:17.612 [2024-07-13 21:59:36.772799] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.612 [2024-07-13 21:59:36.774944] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.612 [2024-07-13 21:59:36.774976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:17.612 BaseBdev2 00:16:17.612 21:59:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:17.612 21:59:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:17.612 BaseBdev3_malloc 00:16:17.612 21:59:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:17.871 true 00:16:17.871 21:59:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:18.129 [2024-07-13 21:59:37.301411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:18.129 [2024-07-13 21:59:37.301460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.129 [2024-07-13 21:59:37.301485] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:16:18.129 [2024-07-13 21:59:37.301498] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.129 
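The trace above shows the raid_write_error_test setup: each base device is a malloc bdev wrapped first by an error-injection bdev (SPDK names it with an EE_ prefix) and then by a passthru bdev that carries the name the raid will claim. A dry-run sketch of that RPC sequence, printing the commands instead of invoking rpc.py (paths and arguments taken from the log, nothing here is executed against SPDK):

```shell
# Dry-run sketch: print the per-base-bdev RPC calls the test issues above.
gen_cmds() {
    rpc="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for bdev in BaseBdev1 BaseBdev2 BaseBdev3; do
        # malloc backing store (arguments 32 512 as in the log)
        echo "$rpc bdev_malloc_create 32 512 -b ${bdev}_malloc"
        # error-injection wrapper; the new bdev is named EE_<base>
        echo "$rpc bdev_error_create ${bdev}_malloc"
        # passthru bdev on top, exposing the name the raid consumes
        echo "$rpc bdev_passthru_create -b EE_${bdev}_malloc -p $bdev"
    done
    # assemble the three passthru bdevs into a concat raid with superblock (-s)
    echo "$rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s"
}
cmds=$(gen_cmds)
printf '%s\n' "$cmds"
```

With this stack in place, bdev_error_inject_error later in the trace can fail writes on EE_BaseBdev1_malloc without the raid layer knowing anything but "BaseBdev1".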
[2024-07-13 21:59:37.303593] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.129 [2024-07-13 21:59:37.303625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:18.129 BaseBdev3 00:16:18.129 21:59:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:16:18.129 [2024-07-13 21:59:37.457850] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:18.129 [2024-07-13 21:59:37.459645] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:18.129 [2024-07-13 21:59:37.459714] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:18.129 [2024-07-13 21:59:37.459926] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041d80 00:16:18.129 [2024-07-13 21:59:37.459940] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:18.129 [2024-07-13 21:59:37.460196] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:16:18.129 [2024-07-13 21:59:37.460388] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041d80 00:16:18.129 [2024-07-13 21:59:37.460404] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041d80 00:16:18.129 [2024-07-13 21:59:37.460558] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.129 21:59:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:18.129 21:59:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:18.129 21:59:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:18.129 21:59:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:18.129 21:59:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:18.129 21:59:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:18.129 21:59:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:18.129 21:59:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:18.129 21:59:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:18.129 21:59:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:18.130 21:59:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.130 21:59:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.388 21:59:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:18.388 "name": "raid_bdev1", 00:16:18.388 "uuid": "b93b5c49-fae9-419a-bad6-ac8b71db3040", 00:16:18.388 "strip_size_kb": 64, 00:16:18.388 "state": "online", 00:16:18.388 "raid_level": "concat", 00:16:18.388 "superblock": true, 00:16:18.388 "num_base_bdevs": 3, 00:16:18.388 "num_base_bdevs_discovered": 3, 00:16:18.388 "num_base_bdevs_operational": 3, 00:16:18.388 "base_bdevs_list": [ 00:16:18.388 { 00:16:18.388 "name": "BaseBdev1", 00:16:18.388 "uuid": "e4869afc-ecd0-5bc4-b0be-ef18a5463192", 00:16:18.388 "is_configured": true, 00:16:18.388 "data_offset": 2048, 00:16:18.388 "data_size": 63488 00:16:18.388 }, 00:16:18.388 { 00:16:18.388 "name": "BaseBdev2", 00:16:18.388 "uuid": "86b829a3-b4cf-58e8-a991-514daacb81fd", 00:16:18.388 "is_configured": true, 
00:16:18.388 "data_offset": 2048, 00:16:18.388 "data_size": 63488 00:16:18.388 }, 00:16:18.388 { 00:16:18.388 "name": "BaseBdev3", 00:16:18.388 "uuid": "4a61694b-a4b4-59fd-a18d-8fe24eb45d6a", 00:16:18.388 "is_configured": true, 00:16:18.388 "data_offset": 2048, 00:16:18.388 "data_size": 63488 00:16:18.388 } 00:16:18.388 ] 00:16:18.388 }' 00:16:18.388 21:59:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:18.388 21:59:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.956 21:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:18.956 21:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:18.956 [2024-07-13 21:59:38.225287] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:16:19.892 21:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:20.152 21:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:20.152 21:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:16:20.152 21:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:16:20.152 21:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:20.152 21:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:20.152 21:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:20.152 21:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=concat 00:16:20.152 21:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:20.152 21:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:20.152 21:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:20.152 21:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:20.152 21:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:20.152 21:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:20.152 21:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.152 21:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.152 21:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:20.152 "name": "raid_bdev1", 00:16:20.152 "uuid": "b93b5c49-fae9-419a-bad6-ac8b71db3040", 00:16:20.152 "strip_size_kb": 64, 00:16:20.152 "state": "online", 00:16:20.152 "raid_level": "concat", 00:16:20.152 "superblock": true, 00:16:20.152 "num_base_bdevs": 3, 00:16:20.152 "num_base_bdevs_discovered": 3, 00:16:20.152 "num_base_bdevs_operational": 3, 00:16:20.152 "base_bdevs_list": [ 00:16:20.152 { 00:16:20.152 "name": "BaseBdev1", 00:16:20.152 "uuid": "e4869afc-ecd0-5bc4-b0be-ef18a5463192", 00:16:20.152 "is_configured": true, 00:16:20.152 "data_offset": 2048, 00:16:20.152 "data_size": 63488 00:16:20.152 }, 00:16:20.152 { 00:16:20.152 "name": "BaseBdev2", 00:16:20.152 "uuid": "86b829a3-b4cf-58e8-a991-514daacb81fd", 00:16:20.152 "is_configured": true, 00:16:20.152 "data_offset": 2048, 00:16:20.152 "data_size": 63488 00:16:20.152 }, 00:16:20.152 { 00:16:20.152 "name": "BaseBdev3", 00:16:20.152 
"uuid": "4a61694b-a4b4-59fd-a18d-8fe24eb45d6a", 00:16:20.152 "is_configured": true, 00:16:20.152 "data_offset": 2048, 00:16:20.152 "data_size": 63488 00:16:20.152 } 00:16:20.152 ] 00:16:20.152 }' 00:16:20.152 21:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:20.152 21:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.718 21:59:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:20.977 [2024-07-13 21:59:40.153942] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:20.977 [2024-07-13 21:59:40.153979] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:20.977 [2024-07-13 21:59:40.156386] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:20.977 [2024-07-13 21:59:40.156428] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.977 [2024-07-13 21:59:40.156465] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:20.977 [2024-07-13 21:59:40.156475] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041d80 name raid_bdev1, state offline 00:16:20.977 0 00:16:20.977 21:59:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1395099 00:16:20.977 21:59:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 1395099 ']' 00:16:20.977 21:59:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 1395099 00:16:20.977 21:59:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:16:20.977 21:59:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:20.977 21:59:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1395099 00:16:20.977 21:59:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:20.977 21:59:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:20.977 21:59:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1395099' 00:16:20.977 killing process with pid 1395099 00:16:20.977 21:59:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 1395099 00:16:20.977 [2024-07-13 21:59:40.229028] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:20.977 21:59:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 1395099 00:16:21.236 [2024-07-13 21:59:40.400992] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:22.614 21:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.GYEgXVFyNm 00:16:22.614 21:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:22.614 21:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:22.614 21:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:16:22.614 21:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:16:22.614 21:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:22.614 21:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:22.614 21:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:16:22.614 00:16:22.614 real 0m6.842s 00:16:22.614 user 0m9.608s 00:16:22.614 sys 0m1.095s 00:16:22.614 21:59:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:22.614 21:59:41 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:22.614 ************************************ 00:16:22.614 END TEST raid_write_error_test 00:16:22.614 ************************************ 00:16:22.614 21:59:41 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:22.614 21:59:41 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:16:22.614 21:59:41 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:16:22.614 21:59:41 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:22.614 21:59:41 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:22.614 21:59:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:22.614 ************************************ 00:16:22.614 START TEST raid_state_function_test 00:16:22.614 ************************************ 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 false 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:22.614 
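The pass criterion of raid_write_error_test above extracts column 6 of the raid_bdev1 row from the bdevperf output file and requires it to differ from 0.00 failures per second. The real file (/raidtest/tmp.GYEgXVFyNm) is not included in the log, so the one-line sample below is fabricated purely to show the extraction pipeline:

```shell
# Fabricated sample row: the real bdevperf output file is not in the log,
# so this column layout is an assumption for illustration only.
sample='raid_bdev1 10.00 5120 0.00 0.00 0.52 0.00'
fail_per_s=$(printf '%s\n' "$sample" | grep -v Job | grep raid_bdev1 | awk '{print $6}')
echo "fail_per_s=$fail_per_s"
# the test passes only when some write failures were actually observed
[ "$fail_per_s" != "0.00" ]
```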
21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1396516 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1396516' 00:16:22.614 Process raid pid: 1396516 
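The @224/@226 xtrace lines above come from the loop bdev_raid.sh uses to synthesize the base bdev name list from num_base_bdevs. In isolation it is just the following (rewritten with a plain string; the original uses a bash array):

```shell
# Reconstruction of the base-bdev name loop seen in the xtrace above.
num_base_bdevs=3
i=1
names=""
while [ "$i" -le "$num_base_bdevs" ]; do
    names="$names BaseBdev$i"
    i=$((i + 1))
done
names=${names# }   # drop the leading space
echo "$names"      # BaseBdev1 BaseBdev2 BaseBdev3
```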
00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1396516 /var/tmp/spdk-raid.sock 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 1396516 ']' 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:22.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:22.614 21:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.614 [2024-07-13 21:59:41.871640] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
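waitforlisten above blocks until the freshly started bdev_svc app is reachable on /var/tmp/spdk-raid.sock. A minimal polling sketch of that idea follows; `wait_for_path` is a hypothetical helper, not SPDK's autotest_common.sh implementation (which also checks that the pid is alive and retries the RPC itself):

```shell
# Hypothetical helper: poll until a path (e.g. an RPC socket) appears,
# or give up after max_tries * 0.1 s. Not SPDK's real waitforlisten.
wait_for_path() {
    path=$1
    max_tries=${2:-50}
    tries=0
    while [ "$tries" -lt "$max_tries" ]; do
        [ -e "$path" ] && return 0
        tries=$((tries + 1))
        sleep 0.1
    done
    return 1
}
```

The skeleton captures only the wait loop; the timeout of 100 retries and the "Waiting for process to start up..." message in the log belong to the real implementation.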
00:16:22.614 [2024-07-13 21:59:41.871727] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:22.614 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:22.614 EAL: Requested device 0000:3d:01.0 cannot be used
00:16:22.614 (the two messages above repeat for each QAT virtual function from 0000:3d:01.1 through 0000:3f:02.1)
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:22.615 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:22.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:22.615 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:22.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:22.615 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:22.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:22.615 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:22.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:22.615 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:22.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:22.615 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:22.874 [2024-07-13 21:59:42.035979] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.874 [2024-07-13 21:59:42.250008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.133 [2024-07-13 21:59:42.494329] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:23.133 [2024-07-13 21:59:42.494363] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:23.393 21:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:23.393 21:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:16:23.393 21:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:23.652 [2024-07-13 21:59:42.785954] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:23.652 [2024-07-13 21:59:42.786013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist 
now 00:16:23.652 [2024-07-13 21:59:42.786024] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:23.652 [2024-07-13 21:59:42.786052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:23.652 [2024-07-13 21:59:42.786063] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:23.652 [2024-07-13 21:59:42.786075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:23.652 21:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:23.652 21:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:23.652 21:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:23.652 21:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:23.652 21:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:23.652 21:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:23.652 21:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:23.652 21:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:23.652 21:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:23.652 21:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:23.652 21:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.652 21:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:16:23.652 21:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:23.652 "name": "Existed_Raid", 00:16:23.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.652 "strip_size_kb": 0, 00:16:23.652 "state": "configuring", 00:16:23.652 "raid_level": "raid1", 00:16:23.652 "superblock": false, 00:16:23.652 "num_base_bdevs": 3, 00:16:23.652 "num_base_bdevs_discovered": 0, 00:16:23.652 "num_base_bdevs_operational": 3, 00:16:23.652 "base_bdevs_list": [ 00:16:23.652 { 00:16:23.652 "name": "BaseBdev1", 00:16:23.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.652 "is_configured": false, 00:16:23.652 "data_offset": 0, 00:16:23.652 "data_size": 0 00:16:23.652 }, 00:16:23.652 { 00:16:23.652 "name": "BaseBdev2", 00:16:23.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.652 "is_configured": false, 00:16:23.652 "data_offset": 0, 00:16:23.652 "data_size": 0 00:16:23.652 }, 00:16:23.652 { 00:16:23.652 "name": "BaseBdev3", 00:16:23.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.652 "is_configured": false, 00:16:23.652 "data_offset": 0, 00:16:23.652 "data_size": 0 00:16:23.652 } 00:16:23.652 ] 00:16:23.652 }' 00:16:23.652 21:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:23.652 21:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.241 21:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:24.241 [2024-07-13 21:59:43.624044] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:24.241 [2024-07-13 21:59:43.624083] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:16:24.500 21:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:24.500 [2024-07-13 21:59:43.796525] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:24.500 [2024-07-13 21:59:43.796567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:24.500 [2024-07-13 21:59:43.796577] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:24.500 [2024-07-13 21:59:43.796591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:24.500 [2024-07-13 21:59:43.796599] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:24.500 [2024-07-13 21:59:43.796610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:24.500 21:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:24.758 [2024-07-13 21:59:43.996985] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:24.758 BaseBdev1 00:16:24.758 21:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:24.758 21:59:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:24.758 21:59:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:24.758 21:59:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:24.758 21:59:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:24.758 21:59:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:24.758 
21:59:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:25.018 21:59:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:25.018 [ 00:16:25.018 { 00:16:25.018 "name": "BaseBdev1", 00:16:25.018 "aliases": [ 00:16:25.018 "3052165e-d0ea-4a50-85cf-3f7cfb3cdf93" 00:16:25.018 ], 00:16:25.018 "product_name": "Malloc disk", 00:16:25.018 "block_size": 512, 00:16:25.018 "num_blocks": 65536, 00:16:25.018 "uuid": "3052165e-d0ea-4a50-85cf-3f7cfb3cdf93", 00:16:25.018 "assigned_rate_limits": { 00:16:25.018 "rw_ios_per_sec": 0, 00:16:25.018 "rw_mbytes_per_sec": 0, 00:16:25.018 "r_mbytes_per_sec": 0, 00:16:25.018 "w_mbytes_per_sec": 0 00:16:25.018 }, 00:16:25.018 "claimed": true, 00:16:25.018 "claim_type": "exclusive_write", 00:16:25.018 "zoned": false, 00:16:25.018 "supported_io_types": { 00:16:25.018 "read": true, 00:16:25.018 "write": true, 00:16:25.018 "unmap": true, 00:16:25.018 "flush": true, 00:16:25.018 "reset": true, 00:16:25.018 "nvme_admin": false, 00:16:25.018 "nvme_io": false, 00:16:25.018 "nvme_io_md": false, 00:16:25.018 "write_zeroes": true, 00:16:25.018 "zcopy": true, 00:16:25.018 "get_zone_info": false, 00:16:25.018 "zone_management": false, 00:16:25.018 "zone_append": false, 00:16:25.018 "compare": false, 00:16:25.018 "compare_and_write": false, 00:16:25.018 "abort": true, 00:16:25.018 "seek_hole": false, 00:16:25.018 "seek_data": false, 00:16:25.018 "copy": true, 00:16:25.018 "nvme_iov_md": false 00:16:25.018 }, 00:16:25.018 "memory_domains": [ 00:16:25.018 { 00:16:25.018 "dma_device_id": "system", 00:16:25.018 "dma_device_type": 1 00:16:25.018 }, 00:16:25.018 { 00:16:25.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.018 "dma_device_type": 2 00:16:25.018 } 
00:16:25.018 ], 00:16:25.018 "driver_specific": {} 00:16:25.018 } 00:16:25.018 ] 00:16:25.018 21:59:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:25.018 21:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:25.018 21:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:25.018 21:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:25.018 21:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:25.018 21:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:25.018 21:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:25.018 21:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:25.018 21:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:25.018 21:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:25.018 21:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:25.018 21:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.018 21:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.277 21:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:25.277 "name": "Existed_Raid", 00:16:25.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.277 "strip_size_kb": 0, 00:16:25.277 "state": "configuring", 00:16:25.277 "raid_level": "raid1", 00:16:25.277 
"superblock": false, 00:16:25.277 "num_base_bdevs": 3, 00:16:25.277 "num_base_bdevs_discovered": 1, 00:16:25.277 "num_base_bdevs_operational": 3, 00:16:25.277 "base_bdevs_list": [ 00:16:25.277 { 00:16:25.277 "name": "BaseBdev1", 00:16:25.277 "uuid": "3052165e-d0ea-4a50-85cf-3f7cfb3cdf93", 00:16:25.277 "is_configured": true, 00:16:25.277 "data_offset": 0, 00:16:25.277 "data_size": 65536 00:16:25.277 }, 00:16:25.277 { 00:16:25.277 "name": "BaseBdev2", 00:16:25.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.277 "is_configured": false, 00:16:25.277 "data_offset": 0, 00:16:25.277 "data_size": 0 00:16:25.277 }, 00:16:25.277 { 00:16:25.277 "name": "BaseBdev3", 00:16:25.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.277 "is_configured": false, 00:16:25.277 "data_offset": 0, 00:16:25.277 "data_size": 0 00:16:25.277 } 00:16:25.277 ] 00:16:25.277 }' 00:16:25.277 21:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:25.277 21:59:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.845 21:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:25.845 [2024-07-13 21:59:45.188117] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:25.845 [2024-07-13 21:59:45.188168] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:16:25.845 21:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:26.104 [2024-07-13 21:59:45.356632] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:26.104 [2024-07-13 21:59:45.358321] 
bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:26.104 [2024-07-13 21:59:45.358355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:26.104 [2024-07-13 21:59:45.358366] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:26.104 [2024-07-13 21:59:45.358377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:26.104 21:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:26.104 21:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:26.104 21:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:26.104 21:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:26.104 21:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:26.104 21:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:26.104 21:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:26.104 21:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:26.104 21:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:26.104 21:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:26.104 21:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:26.104 21:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:26.104 21:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.104 
21:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.403 21:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:26.403 "name": "Existed_Raid", 00:16:26.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.403 "strip_size_kb": 0, 00:16:26.403 "state": "configuring", 00:16:26.403 "raid_level": "raid1", 00:16:26.403 "superblock": false, 00:16:26.403 "num_base_bdevs": 3, 00:16:26.403 "num_base_bdevs_discovered": 1, 00:16:26.403 "num_base_bdevs_operational": 3, 00:16:26.403 "base_bdevs_list": [ 00:16:26.403 { 00:16:26.403 "name": "BaseBdev1", 00:16:26.403 "uuid": "3052165e-d0ea-4a50-85cf-3f7cfb3cdf93", 00:16:26.403 "is_configured": true, 00:16:26.403 "data_offset": 0, 00:16:26.403 "data_size": 65536 00:16:26.403 }, 00:16:26.403 { 00:16:26.403 "name": "BaseBdev2", 00:16:26.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.403 "is_configured": false, 00:16:26.403 "data_offset": 0, 00:16:26.403 "data_size": 0 00:16:26.403 }, 00:16:26.403 { 00:16:26.403 "name": "BaseBdev3", 00:16:26.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.403 "is_configured": false, 00:16:26.403 "data_offset": 0, 00:16:26.403 "data_size": 0 00:16:26.403 } 00:16:26.403 ] 00:16:26.403 }' 00:16:26.403 21:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:26.403 21:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.663 21:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:26.922 [2024-07-13 21:59:46.240852] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:26.922 BaseBdev2 00:16:26.922 21:59:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:26.922 21:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:26.922 21:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:26.922 21:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:26.922 21:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:26.922 21:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:26.922 21:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:27.181 21:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:27.440 [ 00:16:27.440 { 00:16:27.440 "name": "BaseBdev2", 00:16:27.440 "aliases": [ 00:16:27.440 "720f00ba-799c-404e-887c-3a13d19f61ff" 00:16:27.440 ], 00:16:27.440 "product_name": "Malloc disk", 00:16:27.440 "block_size": 512, 00:16:27.440 "num_blocks": 65536, 00:16:27.440 "uuid": "720f00ba-799c-404e-887c-3a13d19f61ff", 00:16:27.440 "assigned_rate_limits": { 00:16:27.440 "rw_ios_per_sec": 0, 00:16:27.440 "rw_mbytes_per_sec": 0, 00:16:27.440 "r_mbytes_per_sec": 0, 00:16:27.440 "w_mbytes_per_sec": 0 00:16:27.440 }, 00:16:27.440 "claimed": true, 00:16:27.440 "claim_type": "exclusive_write", 00:16:27.440 "zoned": false, 00:16:27.440 "supported_io_types": { 00:16:27.440 "read": true, 00:16:27.440 "write": true, 00:16:27.440 "unmap": true, 00:16:27.440 "flush": true, 00:16:27.440 "reset": true, 00:16:27.440 "nvme_admin": false, 00:16:27.440 "nvme_io": false, 00:16:27.440 "nvme_io_md": false, 00:16:27.440 
"write_zeroes": true, 00:16:27.440 "zcopy": true, 00:16:27.440 "get_zone_info": false, 00:16:27.440 "zone_management": false, 00:16:27.440 "zone_append": false, 00:16:27.440 "compare": false, 00:16:27.440 "compare_and_write": false, 00:16:27.440 "abort": true, 00:16:27.440 "seek_hole": false, 00:16:27.440 "seek_data": false, 00:16:27.440 "copy": true, 00:16:27.440 "nvme_iov_md": false 00:16:27.440 }, 00:16:27.440 "memory_domains": [ 00:16:27.440 { 00:16:27.440 "dma_device_id": "system", 00:16:27.440 "dma_device_type": 1 00:16:27.440 }, 00:16:27.440 { 00:16:27.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.440 "dma_device_type": 2 00:16:27.440 } 00:16:27.440 ], 00:16:27.440 "driver_specific": {} 00:16:27.440 } 00:16:27.440 ] 00:16:27.440 21:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:27.440 21:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:27.440 21:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:27.440 21:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:27.440 21:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:27.440 21:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:27.440 21:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:27.440 21:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:27.440 21:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:27.440 21:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:27.441 21:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:16:27.441 21:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:27.441 21:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:27.441 21:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.441 21:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.441 21:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:27.441 "name": "Existed_Raid", 00:16:27.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.441 "strip_size_kb": 0, 00:16:27.441 "state": "configuring", 00:16:27.441 "raid_level": "raid1", 00:16:27.441 "superblock": false, 00:16:27.441 "num_base_bdevs": 3, 00:16:27.441 "num_base_bdevs_discovered": 2, 00:16:27.441 "num_base_bdevs_operational": 3, 00:16:27.441 "base_bdevs_list": [ 00:16:27.441 { 00:16:27.441 "name": "BaseBdev1", 00:16:27.441 "uuid": "3052165e-d0ea-4a50-85cf-3f7cfb3cdf93", 00:16:27.441 "is_configured": true, 00:16:27.441 "data_offset": 0, 00:16:27.441 "data_size": 65536 00:16:27.441 }, 00:16:27.441 { 00:16:27.441 "name": "BaseBdev2", 00:16:27.441 "uuid": "720f00ba-799c-404e-887c-3a13d19f61ff", 00:16:27.441 "is_configured": true, 00:16:27.441 "data_offset": 0, 00:16:27.441 "data_size": 65536 00:16:27.441 }, 00:16:27.441 { 00:16:27.441 "name": "BaseBdev3", 00:16:27.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.441 "is_configured": false, 00:16:27.441 "data_offset": 0, 00:16:27.441 "data_size": 0 00:16:27.441 } 00:16:27.441 ] 00:16:27.441 }' 00:16:27.441 21:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:27.441 21:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.009 21:59:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:28.268 [2024-07-13 21:59:47.459127] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:28.268 [2024-07-13 21:59:47.459173] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:16:28.268 [2024-07-13 21:59:47.459189] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:28.268 [2024-07-13 21:59:47.459440] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:16:28.268 [2024-07-13 21:59:47.459643] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:16:28.268 [2024-07-13 21:59:47.459654] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:16:28.268 [2024-07-13 21:59:47.459921] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.268 BaseBdev3 00:16:28.268 21:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:28.268 21:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:28.268 21:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:28.268 21:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:28.268 21:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:28.268 21:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:28.268 21:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 
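The trace above repeatedly invokes `verify_raid_bdev_state`, which pipes `bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "...")'` and compares the resulting fields against the expected state. A rough standalone sketch of that check (a hypothetical Python equivalent for illustration, not the suite's actual bash helper in `bdev_raid.sh`) looks like:

```python
import json

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level,
                           strip_size, num_operational):
    # Hypothetical stand-in for the shell helper: given the parsed JSON
    # that `bdev_raid_get_bdevs all` returns, select one raid bdev by
    # name (the jq `select` step) and compare the fields the test
    # asserts on.
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    return info

# Abridged copy of the get_bdevs output captured in the log above
# (one base bdev configured, raid1 still assembling).
sample = json.loads("""
[{
  "name": "Existed_Raid",
  "strip_size_kb": 0,
  "state": "configuring",
  "raid_level": "raid1",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 3
}]
""")

info = verify_raid_bdev_state(sample, "Existed_Raid", "configuring",
                              "raid1", 0, 3)
print(info["num_base_bdevs_discovered"])
```

In the real test the same comparison is done in bash: the JSON is stored in `raid_bdev_info` and individual fields are extracted with further `jq` calls before being matched against the expected values.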
00:16:28.268 21:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:28.528 [ 00:16:28.528 { 00:16:28.528 "name": "BaseBdev3", 00:16:28.528 "aliases": [ 00:16:28.528 "9ef26b59-8929-4f65-b197-851af51e6151" 00:16:28.528 ], 00:16:28.528 "product_name": "Malloc disk", 00:16:28.528 "block_size": 512, 00:16:28.528 "num_blocks": 65536, 00:16:28.528 "uuid": "9ef26b59-8929-4f65-b197-851af51e6151", 00:16:28.528 "assigned_rate_limits": { 00:16:28.528 "rw_ios_per_sec": 0, 00:16:28.528 "rw_mbytes_per_sec": 0, 00:16:28.528 "r_mbytes_per_sec": 0, 00:16:28.528 "w_mbytes_per_sec": 0 00:16:28.528 }, 00:16:28.528 "claimed": true, 00:16:28.528 "claim_type": "exclusive_write", 00:16:28.528 "zoned": false, 00:16:28.528 "supported_io_types": { 00:16:28.528 "read": true, 00:16:28.528 "write": true, 00:16:28.528 "unmap": true, 00:16:28.528 "flush": true, 00:16:28.528 "reset": true, 00:16:28.528 "nvme_admin": false, 00:16:28.528 "nvme_io": false, 00:16:28.528 "nvme_io_md": false, 00:16:28.528 "write_zeroes": true, 00:16:28.528 "zcopy": true, 00:16:28.528 "get_zone_info": false, 00:16:28.528 "zone_management": false, 00:16:28.528 "zone_append": false, 00:16:28.528 "compare": false, 00:16:28.528 "compare_and_write": false, 00:16:28.528 "abort": true, 00:16:28.528 "seek_hole": false, 00:16:28.528 "seek_data": false, 00:16:28.528 "copy": true, 00:16:28.528 "nvme_iov_md": false 00:16:28.528 }, 00:16:28.528 "memory_domains": [ 00:16:28.528 { 00:16:28.528 "dma_device_id": "system", 00:16:28.528 "dma_device_type": 1 00:16:28.528 }, 00:16:28.528 { 00:16:28.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.528 "dma_device_type": 2 00:16:28.528 } 00:16:28.528 ], 00:16:28.528 "driver_specific": {} 00:16:28.528 } 00:16:28.528 ] 00:16:28.528 21:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:28.528 
21:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:28.528 21:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:28.528 21:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:28.528 21:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:28.528 21:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:28.528 21:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:28.528 21:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:28.528 21:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:28.528 21:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:28.528 21:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:28.528 21:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:28.528 21:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:28.528 21:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.528 21:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.787 21:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:28.787 "name": "Existed_Raid", 00:16:28.787 "uuid": "0f7ab255-dbca-4082-88bb-b558a75a5c89", 00:16:28.787 "strip_size_kb": 0, 00:16:28.787 "state": "online", 00:16:28.787 "raid_level": "raid1", 00:16:28.787 
"superblock": false, 00:16:28.787 "num_base_bdevs": 3, 00:16:28.787 "num_base_bdevs_discovered": 3, 00:16:28.787 "num_base_bdevs_operational": 3, 00:16:28.787 "base_bdevs_list": [ 00:16:28.787 { 00:16:28.787 "name": "BaseBdev1", 00:16:28.787 "uuid": "3052165e-d0ea-4a50-85cf-3f7cfb3cdf93", 00:16:28.787 "is_configured": true, 00:16:28.787 "data_offset": 0, 00:16:28.787 "data_size": 65536 00:16:28.787 }, 00:16:28.787 { 00:16:28.787 "name": "BaseBdev2", 00:16:28.787 "uuid": "720f00ba-799c-404e-887c-3a13d19f61ff", 00:16:28.787 "is_configured": true, 00:16:28.787 "data_offset": 0, 00:16:28.787 "data_size": 65536 00:16:28.787 }, 00:16:28.787 { 00:16:28.787 "name": "BaseBdev3", 00:16:28.787 "uuid": "9ef26b59-8929-4f65-b197-851af51e6151", 00:16:28.787 "is_configured": true, 00:16:28.787 "data_offset": 0, 00:16:28.787 "data_size": 65536 00:16:28.787 } 00:16:28.787 ] 00:16:28.787 }' 00:16:28.787 21:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:28.787 21:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.357 21:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:29.357 21:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:29.357 21:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:29.357 21:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:29.357 21:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:29.357 21:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:29.357 21:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:29.357 21:59:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:29.357 [2024-07-13 21:59:48.638499] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:29.357 21:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:29.357 "name": "Existed_Raid", 00:16:29.357 "aliases": [ 00:16:29.357 "0f7ab255-dbca-4082-88bb-b558a75a5c89" 00:16:29.357 ], 00:16:29.357 "product_name": "Raid Volume", 00:16:29.357 "block_size": 512, 00:16:29.357 "num_blocks": 65536, 00:16:29.357 "uuid": "0f7ab255-dbca-4082-88bb-b558a75a5c89", 00:16:29.357 "assigned_rate_limits": { 00:16:29.357 "rw_ios_per_sec": 0, 00:16:29.357 "rw_mbytes_per_sec": 0, 00:16:29.357 "r_mbytes_per_sec": 0, 00:16:29.357 "w_mbytes_per_sec": 0 00:16:29.357 }, 00:16:29.357 "claimed": false, 00:16:29.357 "zoned": false, 00:16:29.357 "supported_io_types": { 00:16:29.357 "read": true, 00:16:29.357 "write": true, 00:16:29.357 "unmap": false, 00:16:29.357 "flush": false, 00:16:29.357 "reset": true, 00:16:29.357 "nvme_admin": false, 00:16:29.357 "nvme_io": false, 00:16:29.357 "nvme_io_md": false, 00:16:29.357 "write_zeroes": true, 00:16:29.357 "zcopy": false, 00:16:29.357 "get_zone_info": false, 00:16:29.357 "zone_management": false, 00:16:29.357 "zone_append": false, 00:16:29.357 "compare": false, 00:16:29.357 "compare_and_write": false, 00:16:29.357 "abort": false, 00:16:29.357 "seek_hole": false, 00:16:29.357 "seek_data": false, 00:16:29.357 "copy": false, 00:16:29.357 "nvme_iov_md": false 00:16:29.357 }, 00:16:29.357 "memory_domains": [ 00:16:29.357 { 00:16:29.357 "dma_device_id": "system", 00:16:29.357 "dma_device_type": 1 00:16:29.357 }, 00:16:29.357 { 00:16:29.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.357 "dma_device_type": 2 00:16:29.357 }, 00:16:29.357 { 00:16:29.357 "dma_device_id": "system", 00:16:29.357 "dma_device_type": 1 00:16:29.357 }, 00:16:29.357 { 00:16:29.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:16:29.357 "dma_device_type": 2 00:16:29.357 }, 00:16:29.357 { 00:16:29.357 "dma_device_id": "system", 00:16:29.357 "dma_device_type": 1 00:16:29.357 }, 00:16:29.357 { 00:16:29.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.357 "dma_device_type": 2 00:16:29.357 } 00:16:29.357 ], 00:16:29.357 "driver_specific": { 00:16:29.357 "raid": { 00:16:29.357 "uuid": "0f7ab255-dbca-4082-88bb-b558a75a5c89", 00:16:29.357 "strip_size_kb": 0, 00:16:29.357 "state": "online", 00:16:29.357 "raid_level": "raid1", 00:16:29.357 "superblock": false, 00:16:29.357 "num_base_bdevs": 3, 00:16:29.357 "num_base_bdevs_discovered": 3, 00:16:29.357 "num_base_bdevs_operational": 3, 00:16:29.357 "base_bdevs_list": [ 00:16:29.357 { 00:16:29.357 "name": "BaseBdev1", 00:16:29.357 "uuid": "3052165e-d0ea-4a50-85cf-3f7cfb3cdf93", 00:16:29.357 "is_configured": true, 00:16:29.357 "data_offset": 0, 00:16:29.357 "data_size": 65536 00:16:29.357 }, 00:16:29.357 { 00:16:29.357 "name": "BaseBdev2", 00:16:29.357 "uuid": "720f00ba-799c-404e-887c-3a13d19f61ff", 00:16:29.357 "is_configured": true, 00:16:29.357 "data_offset": 0, 00:16:29.357 "data_size": 65536 00:16:29.357 }, 00:16:29.357 { 00:16:29.357 "name": "BaseBdev3", 00:16:29.357 "uuid": "9ef26b59-8929-4f65-b197-851af51e6151", 00:16:29.357 "is_configured": true, 00:16:29.357 "data_offset": 0, 00:16:29.357 "data_size": 65536 00:16:29.357 } 00:16:29.357 ] 00:16:29.357 } 00:16:29.357 } 00:16:29.357 }' 00:16:29.357 21:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:29.357 21:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:29.357 BaseBdev2 00:16:29.357 BaseBdev3' 00:16:29.357 21:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:29.357 21:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:29.357 21:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:29.616 21:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:29.616 "name": "BaseBdev1", 00:16:29.616 "aliases": [ 00:16:29.616 "3052165e-d0ea-4a50-85cf-3f7cfb3cdf93" 00:16:29.616 ], 00:16:29.616 "product_name": "Malloc disk", 00:16:29.616 "block_size": 512, 00:16:29.616 "num_blocks": 65536, 00:16:29.616 "uuid": "3052165e-d0ea-4a50-85cf-3f7cfb3cdf93", 00:16:29.616 "assigned_rate_limits": { 00:16:29.616 "rw_ios_per_sec": 0, 00:16:29.616 "rw_mbytes_per_sec": 0, 00:16:29.616 "r_mbytes_per_sec": 0, 00:16:29.616 "w_mbytes_per_sec": 0 00:16:29.616 }, 00:16:29.616 "claimed": true, 00:16:29.616 "claim_type": "exclusive_write", 00:16:29.616 "zoned": false, 00:16:29.616 "supported_io_types": { 00:16:29.616 "read": true, 00:16:29.616 "write": true, 00:16:29.616 "unmap": true, 00:16:29.616 "flush": true, 00:16:29.616 "reset": true, 00:16:29.616 "nvme_admin": false, 00:16:29.616 "nvme_io": false, 00:16:29.616 "nvme_io_md": false, 00:16:29.616 "write_zeroes": true, 00:16:29.616 "zcopy": true, 00:16:29.616 "get_zone_info": false, 00:16:29.616 "zone_management": false, 00:16:29.616 "zone_append": false, 00:16:29.616 "compare": false, 00:16:29.616 "compare_and_write": false, 00:16:29.617 "abort": true, 00:16:29.617 "seek_hole": false, 00:16:29.617 "seek_data": false, 00:16:29.617 "copy": true, 00:16:29.617 "nvme_iov_md": false 00:16:29.617 }, 00:16:29.617 "memory_domains": [ 00:16:29.617 { 00:16:29.617 "dma_device_id": "system", 00:16:29.617 "dma_device_type": 1 00:16:29.617 }, 00:16:29.617 { 00:16:29.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.617 "dma_device_type": 2 00:16:29.617 } 00:16:29.617 ], 00:16:29.617 "driver_specific": {} 00:16:29.617 }' 00:16:29.617 21:59:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.617 21:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.617 21:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:29.617 21:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.617 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.876 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:29.876 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:29.876 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:29.876 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:29.876 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.876 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.876 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:29.876 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:29.876 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:29.876 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:30.135 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:30.135 "name": "BaseBdev2", 00:16:30.135 "aliases": [ 00:16:30.135 "720f00ba-799c-404e-887c-3a13d19f61ff" 00:16:30.135 ], 00:16:30.135 "product_name": "Malloc disk", 00:16:30.135 "block_size": 512, 00:16:30.135 "num_blocks": 65536, 00:16:30.135 "uuid": "720f00ba-799c-404e-887c-3a13d19f61ff", 
00:16:30.135 "assigned_rate_limits": { 00:16:30.135 "rw_ios_per_sec": 0, 00:16:30.135 "rw_mbytes_per_sec": 0, 00:16:30.135 "r_mbytes_per_sec": 0, 00:16:30.135 "w_mbytes_per_sec": 0 00:16:30.135 }, 00:16:30.135 "claimed": true, 00:16:30.135 "claim_type": "exclusive_write", 00:16:30.135 "zoned": false, 00:16:30.135 "supported_io_types": { 00:16:30.135 "read": true, 00:16:30.135 "write": true, 00:16:30.135 "unmap": true, 00:16:30.135 "flush": true, 00:16:30.135 "reset": true, 00:16:30.135 "nvme_admin": false, 00:16:30.135 "nvme_io": false, 00:16:30.135 "nvme_io_md": false, 00:16:30.135 "write_zeroes": true, 00:16:30.135 "zcopy": true, 00:16:30.135 "get_zone_info": false, 00:16:30.135 "zone_management": false, 00:16:30.135 "zone_append": false, 00:16:30.135 "compare": false, 00:16:30.135 "compare_and_write": false, 00:16:30.135 "abort": true, 00:16:30.135 "seek_hole": false, 00:16:30.135 "seek_data": false, 00:16:30.135 "copy": true, 00:16:30.135 "nvme_iov_md": false 00:16:30.135 }, 00:16:30.135 "memory_domains": [ 00:16:30.135 { 00:16:30.135 "dma_device_id": "system", 00:16:30.135 "dma_device_type": 1 00:16:30.135 }, 00:16:30.135 { 00:16:30.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.135 "dma_device_type": 2 00:16:30.135 } 00:16:30.135 ], 00:16:30.135 "driver_specific": {} 00:16:30.135 }' 00:16:30.135 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:30.135 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:30.135 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:30.135 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:30.135 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:30.135 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:30.135 21:59:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:30.395 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:30.395 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:30.395 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:30.395 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:30.395 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:30.395 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:30.395 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:30.395 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:30.654 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:30.654 "name": "BaseBdev3", 00:16:30.654 "aliases": [ 00:16:30.654 "9ef26b59-8929-4f65-b197-851af51e6151" 00:16:30.654 ], 00:16:30.654 "product_name": "Malloc disk", 00:16:30.654 "block_size": 512, 00:16:30.654 "num_blocks": 65536, 00:16:30.654 "uuid": "9ef26b59-8929-4f65-b197-851af51e6151", 00:16:30.654 "assigned_rate_limits": { 00:16:30.654 "rw_ios_per_sec": 0, 00:16:30.654 "rw_mbytes_per_sec": 0, 00:16:30.654 "r_mbytes_per_sec": 0, 00:16:30.654 "w_mbytes_per_sec": 0 00:16:30.654 }, 00:16:30.654 "claimed": true, 00:16:30.654 "claim_type": "exclusive_write", 00:16:30.654 "zoned": false, 00:16:30.654 "supported_io_types": { 00:16:30.654 "read": true, 00:16:30.654 "write": true, 00:16:30.654 "unmap": true, 00:16:30.654 "flush": true, 00:16:30.654 "reset": true, 00:16:30.654 "nvme_admin": false, 00:16:30.654 "nvme_io": false, 00:16:30.654 "nvme_io_md": false, 00:16:30.654 "write_zeroes": true, 
00:16:30.654 "zcopy": true, 00:16:30.654 "get_zone_info": false, 00:16:30.654 "zone_management": false, 00:16:30.654 "zone_append": false, 00:16:30.654 "compare": false, 00:16:30.654 "compare_and_write": false, 00:16:30.654 "abort": true, 00:16:30.654 "seek_hole": false, 00:16:30.654 "seek_data": false, 00:16:30.654 "copy": true, 00:16:30.654 "nvme_iov_md": false 00:16:30.654 }, 00:16:30.654 "memory_domains": [ 00:16:30.654 { 00:16:30.654 "dma_device_id": "system", 00:16:30.654 "dma_device_type": 1 00:16:30.654 }, 00:16:30.654 { 00:16:30.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.654 "dma_device_type": 2 00:16:30.654 } 00:16:30.654 ], 00:16:30.654 "driver_specific": {} 00:16:30.654 }' 00:16:30.654 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:30.654 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:30.654 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:30.654 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:30.654 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:30.654 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:30.654 21:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:30.654 21:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:30.912 21:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:30.912 21:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:30.912 21:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:30.912 21:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:30.912 21:59:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:30.912 [2024-07-13 21:59:50.278653] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:31.171 21:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:31.171 21:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:31.171 21:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:31.171 21:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:31.171 21:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:31.171 21:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:31.171 21:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:31.171 21:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:31.171 21:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:31.171 21:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:31.171 21:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:31.171 21:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:31.172 21:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:31.172 21:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:31.172 21:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:31.172 21:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.172 21:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.172 21:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:31.172 "name": "Existed_Raid", 00:16:31.172 "uuid": "0f7ab255-dbca-4082-88bb-b558a75a5c89", 00:16:31.172 "strip_size_kb": 0, 00:16:31.172 "state": "online", 00:16:31.172 "raid_level": "raid1", 00:16:31.172 "superblock": false, 00:16:31.172 "num_base_bdevs": 3, 00:16:31.172 "num_base_bdevs_discovered": 2, 00:16:31.172 "num_base_bdevs_operational": 2, 00:16:31.172 "base_bdevs_list": [ 00:16:31.172 { 00:16:31.172 "name": null, 00:16:31.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.172 "is_configured": false, 00:16:31.172 "data_offset": 0, 00:16:31.172 "data_size": 65536 00:16:31.172 }, 00:16:31.172 { 00:16:31.172 "name": "BaseBdev2", 00:16:31.172 "uuid": "720f00ba-799c-404e-887c-3a13d19f61ff", 00:16:31.172 "is_configured": true, 00:16:31.172 "data_offset": 0, 00:16:31.172 "data_size": 65536 00:16:31.172 }, 00:16:31.172 { 00:16:31.172 "name": "BaseBdev3", 00:16:31.172 "uuid": "9ef26b59-8929-4f65-b197-851af51e6151", 00:16:31.172 "is_configured": true, 00:16:31.172 "data_offset": 0, 00:16:31.172 "data_size": 65536 00:16:31.172 } 00:16:31.172 ] 00:16:31.172 }' 00:16:31.172 21:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:31.172 21:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.739 21:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:31.739 21:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:31.739 21:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.739 21:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:31.998 21:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:31.998 21:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:31.998 21:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:31.998 [2024-07-13 21:59:51.293353] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:32.257 21:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:32.257 21:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:32.257 21:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.257 21:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:32.257 21:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:32.257 21:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:32.257 21:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:32.516 [2024-07-13 21:59:51.714290] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:32.517 [2024-07-13 21:59:51.714386] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.517 
[2024-07-13 21:59:51.807386] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.517 [2024-07-13 21:59:51.807438] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.517 [2024-07-13 21:59:51.807452] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:16:32.517 21:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:32.517 21:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:32.517 21:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.517 21:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:32.775 21:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:32.775 21:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:32.775 21:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:16:32.775 21:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:32.775 21:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:32.775 21:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:33.034 BaseBdev2 00:16:33.034 21:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:33.034 21:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:33.034 21:59:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:33.034 21:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:33.034 21:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:33.034 21:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:33.034 21:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:33.034 21:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:33.292 [ 00:16:33.292 { 00:16:33.292 "name": "BaseBdev2", 00:16:33.292 "aliases": [ 00:16:33.292 "cffb29ed-6c62-4ffa-a75e-d25b13524460" 00:16:33.292 ], 00:16:33.292 "product_name": "Malloc disk", 00:16:33.292 "block_size": 512, 00:16:33.292 "num_blocks": 65536, 00:16:33.292 "uuid": "cffb29ed-6c62-4ffa-a75e-d25b13524460", 00:16:33.292 "assigned_rate_limits": { 00:16:33.292 "rw_ios_per_sec": 0, 00:16:33.292 "rw_mbytes_per_sec": 0, 00:16:33.292 "r_mbytes_per_sec": 0, 00:16:33.292 "w_mbytes_per_sec": 0 00:16:33.292 }, 00:16:33.292 "claimed": false, 00:16:33.292 "zoned": false, 00:16:33.292 "supported_io_types": { 00:16:33.292 "read": true, 00:16:33.292 "write": true, 00:16:33.292 "unmap": true, 00:16:33.292 "flush": true, 00:16:33.292 "reset": true, 00:16:33.292 "nvme_admin": false, 00:16:33.292 "nvme_io": false, 00:16:33.292 "nvme_io_md": false, 00:16:33.292 "write_zeroes": true, 00:16:33.292 "zcopy": true, 00:16:33.292 "get_zone_info": false, 00:16:33.292 "zone_management": false, 00:16:33.292 "zone_append": false, 00:16:33.292 "compare": false, 00:16:33.292 "compare_and_write": false, 00:16:33.292 "abort": true, 00:16:33.292 "seek_hole": false, 00:16:33.292 "seek_data": 
false, 00:16:33.292 "copy": true, 00:16:33.292 "nvme_iov_md": false 00:16:33.292 }, 00:16:33.292 "memory_domains": [ 00:16:33.292 { 00:16:33.292 "dma_device_id": "system", 00:16:33.292 "dma_device_type": 1 00:16:33.292 }, 00:16:33.292 { 00:16:33.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.293 "dma_device_type": 2 00:16:33.293 } 00:16:33.293 ], 00:16:33.293 "driver_specific": {} 00:16:33.293 } 00:16:33.293 ] 00:16:33.293 21:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:33.293 21:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:33.293 21:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:33.293 21:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:33.550 BaseBdev3 00:16:33.550 21:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:33.550 21:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:33.550 21:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:33.550 21:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:33.550 21:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:33.550 21:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:33.550 21:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:33.550 21:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:33.809 [ 00:16:33.809 { 00:16:33.809 "name": "BaseBdev3", 00:16:33.809 "aliases": [ 00:16:33.809 "03bd6697-3df9-46ef-8ce5-73910579dee5" 00:16:33.809 ], 00:16:33.809 "product_name": "Malloc disk", 00:16:33.809 "block_size": 512, 00:16:33.809 "num_blocks": 65536, 00:16:33.809 "uuid": "03bd6697-3df9-46ef-8ce5-73910579dee5", 00:16:33.809 "assigned_rate_limits": { 00:16:33.809 "rw_ios_per_sec": 0, 00:16:33.809 "rw_mbytes_per_sec": 0, 00:16:33.809 "r_mbytes_per_sec": 0, 00:16:33.809 "w_mbytes_per_sec": 0 00:16:33.809 }, 00:16:33.809 "claimed": false, 00:16:33.809 "zoned": false, 00:16:33.809 "supported_io_types": { 00:16:33.809 "read": true, 00:16:33.809 "write": true, 00:16:33.809 "unmap": true, 00:16:33.809 "flush": true, 00:16:33.809 "reset": true, 00:16:33.809 "nvme_admin": false, 00:16:33.809 "nvme_io": false, 00:16:33.809 "nvme_io_md": false, 00:16:33.809 "write_zeroes": true, 00:16:33.809 "zcopy": true, 00:16:33.809 "get_zone_info": false, 00:16:33.809 "zone_management": false, 00:16:33.809 "zone_append": false, 00:16:33.809 "compare": false, 00:16:33.809 "compare_and_write": false, 00:16:33.809 "abort": true, 00:16:33.809 "seek_hole": false, 00:16:33.809 "seek_data": false, 00:16:33.809 "copy": true, 00:16:33.809 "nvme_iov_md": false 00:16:33.809 }, 00:16:33.809 "memory_domains": [ 00:16:33.809 { 00:16:33.809 "dma_device_id": "system", 00:16:33.809 "dma_device_type": 1 00:16:33.809 }, 00:16:33.809 { 00:16:33.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.809 "dma_device_type": 2 00:16:33.809 } 00:16:33.809 ], 00:16:33.809 "driver_specific": {} 00:16:33.809 } 00:16:33.809 ] 00:16:33.809 21:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:33.809 21:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:33.809 21:59:53 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:33.809 21:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:33.809 [2024-07-13 21:59:53.183360] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:33.809 [2024-07-13 21:59:53.183405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:33.809 [2024-07-13 21:59:53.183447] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:33.809 [2024-07-13 21:59:53.185210] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:33.809 21:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:33.809 21:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:34.069 21:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:34.069 21:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:34.069 21:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:34.069 21:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:34.069 21:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:34.069 21:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:34.069 21:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:34.069 21:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:34.069 21:59:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.069 21:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.069 21:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:34.069 "name": "Existed_Raid", 00:16:34.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.069 "strip_size_kb": 0, 00:16:34.069 "state": "configuring", 00:16:34.069 "raid_level": "raid1", 00:16:34.069 "superblock": false, 00:16:34.069 "num_base_bdevs": 3, 00:16:34.069 "num_base_bdevs_discovered": 2, 00:16:34.069 "num_base_bdevs_operational": 3, 00:16:34.069 "base_bdevs_list": [ 00:16:34.069 { 00:16:34.069 "name": "BaseBdev1", 00:16:34.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.069 "is_configured": false, 00:16:34.069 "data_offset": 0, 00:16:34.069 "data_size": 0 00:16:34.069 }, 00:16:34.069 { 00:16:34.069 "name": "BaseBdev2", 00:16:34.069 "uuid": "cffb29ed-6c62-4ffa-a75e-d25b13524460", 00:16:34.069 "is_configured": true, 00:16:34.069 "data_offset": 0, 00:16:34.069 "data_size": 65536 00:16:34.069 }, 00:16:34.069 { 00:16:34.069 "name": "BaseBdev3", 00:16:34.069 "uuid": "03bd6697-3df9-46ef-8ce5-73910579dee5", 00:16:34.069 "is_configured": true, 00:16:34.069 "data_offset": 0, 00:16:34.069 "data_size": 65536 00:16:34.069 } 00:16:34.069 ] 00:16:34.069 }' 00:16:34.069 21:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:34.069 21:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.634 21:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:34.634 [2024-07-13 21:59:54.009541] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:34.634 21:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:34.893 21:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:34.893 21:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:34.893 21:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:34.893 21:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:34.893 21:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:34.893 21:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:34.893 21:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:34.893 21:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:34.893 21:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:34.893 21:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.893 21:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.893 21:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:34.893 "name": "Existed_Raid", 00:16:34.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.893 "strip_size_kb": 0, 00:16:34.893 "state": "configuring", 00:16:34.893 "raid_level": "raid1", 00:16:34.893 "superblock": false, 00:16:34.893 "num_base_bdevs": 3, 00:16:34.893 "num_base_bdevs_discovered": 1, 00:16:34.893 
"num_base_bdevs_operational": 3, 00:16:34.893 "base_bdevs_list": [ 00:16:34.893 { 00:16:34.893 "name": "BaseBdev1", 00:16:34.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.893 "is_configured": false, 00:16:34.893 "data_offset": 0, 00:16:34.893 "data_size": 0 00:16:34.893 }, 00:16:34.893 { 00:16:34.893 "name": null, 00:16:34.893 "uuid": "cffb29ed-6c62-4ffa-a75e-d25b13524460", 00:16:34.893 "is_configured": false, 00:16:34.893 "data_offset": 0, 00:16:34.893 "data_size": 65536 00:16:34.893 }, 00:16:34.893 { 00:16:34.893 "name": "BaseBdev3", 00:16:34.893 "uuid": "03bd6697-3df9-46ef-8ce5-73910579dee5", 00:16:34.893 "is_configured": true, 00:16:34.893 "data_offset": 0, 00:16:34.893 "data_size": 65536 00:16:34.893 } 00:16:34.893 ] 00:16:34.893 }' 00:16:34.893 21:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:34.893 21:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.458 21:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:35.458 21:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.716 21:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:35.716 21:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:35.716 [2024-07-13 21:59:55.042799] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:35.716 BaseBdev1 00:16:35.716 21:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:35.716 21:59:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local 
bdev_name=BaseBdev1 00:16:35.716 21:59:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:35.716 21:59:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:35.716 21:59:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:35.716 21:59:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:35.716 21:59:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:35.975 21:59:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:36.234 [ 00:16:36.234 { 00:16:36.234 "name": "BaseBdev1", 00:16:36.234 "aliases": [ 00:16:36.234 "ef3c40f5-dd39-4542-9be1-b2684c1bd283" 00:16:36.234 ], 00:16:36.234 "product_name": "Malloc disk", 00:16:36.234 "block_size": 512, 00:16:36.234 "num_blocks": 65536, 00:16:36.234 "uuid": "ef3c40f5-dd39-4542-9be1-b2684c1bd283", 00:16:36.234 "assigned_rate_limits": { 00:16:36.234 "rw_ios_per_sec": 0, 00:16:36.234 "rw_mbytes_per_sec": 0, 00:16:36.234 "r_mbytes_per_sec": 0, 00:16:36.234 "w_mbytes_per_sec": 0 00:16:36.234 }, 00:16:36.234 "claimed": true, 00:16:36.234 "claim_type": "exclusive_write", 00:16:36.234 "zoned": false, 00:16:36.234 "supported_io_types": { 00:16:36.234 "read": true, 00:16:36.234 "write": true, 00:16:36.234 "unmap": true, 00:16:36.234 "flush": true, 00:16:36.234 "reset": true, 00:16:36.234 "nvme_admin": false, 00:16:36.234 "nvme_io": false, 00:16:36.234 "nvme_io_md": false, 00:16:36.234 "write_zeroes": true, 00:16:36.234 "zcopy": true, 00:16:36.234 "get_zone_info": false, 00:16:36.234 "zone_management": false, 00:16:36.234 "zone_append": false, 00:16:36.234 "compare": false, 
00:16:36.234 "compare_and_write": false, 00:16:36.234 "abort": true, 00:16:36.234 "seek_hole": false, 00:16:36.234 "seek_data": false, 00:16:36.234 "copy": true, 00:16:36.234 "nvme_iov_md": false 00:16:36.234 }, 00:16:36.234 "memory_domains": [ 00:16:36.234 { 00:16:36.234 "dma_device_id": "system", 00:16:36.234 "dma_device_type": 1 00:16:36.234 }, 00:16:36.234 { 00:16:36.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.234 "dma_device_type": 2 00:16:36.234 } 00:16:36.234 ], 00:16:36.234 "driver_specific": {} 00:16:36.234 } 00:16:36.234 ] 00:16:36.234 21:59:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:36.234 21:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:36.234 21:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:36.234 21:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:36.234 21:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:36.234 21:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:36.234 21:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:36.234 21:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:36.234 21:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:36.234 21:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:36.234 21:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:36.234 21:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:16:36.234 21:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.234 21:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:36.234 "name": "Existed_Raid", 00:16:36.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.234 "strip_size_kb": 0, 00:16:36.234 "state": "configuring", 00:16:36.234 "raid_level": "raid1", 00:16:36.234 "superblock": false, 00:16:36.234 "num_base_bdevs": 3, 00:16:36.234 "num_base_bdevs_discovered": 2, 00:16:36.234 "num_base_bdevs_operational": 3, 00:16:36.234 "base_bdevs_list": [ 00:16:36.234 { 00:16:36.234 "name": "BaseBdev1", 00:16:36.234 "uuid": "ef3c40f5-dd39-4542-9be1-b2684c1bd283", 00:16:36.234 "is_configured": true, 00:16:36.234 "data_offset": 0, 00:16:36.234 "data_size": 65536 00:16:36.234 }, 00:16:36.234 { 00:16:36.234 "name": null, 00:16:36.234 "uuid": "cffb29ed-6c62-4ffa-a75e-d25b13524460", 00:16:36.234 "is_configured": false, 00:16:36.234 "data_offset": 0, 00:16:36.234 "data_size": 65536 00:16:36.234 }, 00:16:36.234 { 00:16:36.234 "name": "BaseBdev3", 00:16:36.234 "uuid": "03bd6697-3df9-46ef-8ce5-73910579dee5", 00:16:36.234 "is_configured": true, 00:16:36.234 "data_offset": 0, 00:16:36.234 "data_size": 65536 00:16:36.234 } 00:16:36.234 ] 00:16:36.234 }' 00:16:36.234 21:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:36.234 21:59:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.802 21:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.802 21:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:37.062 21:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e 
]] 00:16:37.062 21:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:37.062 [2024-07-13 21:59:56.362447] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:37.062 21:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:37.062 21:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:37.062 21:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:37.062 21:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:37.062 21:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:37.062 21:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:37.062 21:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:37.062 21:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:37.062 21:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:37.062 21:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:37.062 21:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.062 21:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.321 21:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:37.321 "name": "Existed_Raid", 00:16:37.321 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:37.321 "strip_size_kb": 0, 00:16:37.321 "state": "configuring", 00:16:37.321 "raid_level": "raid1", 00:16:37.321 "superblock": false, 00:16:37.321 "num_base_bdevs": 3, 00:16:37.321 "num_base_bdevs_discovered": 1, 00:16:37.321 "num_base_bdevs_operational": 3, 00:16:37.321 "base_bdevs_list": [ 00:16:37.321 { 00:16:37.321 "name": "BaseBdev1", 00:16:37.321 "uuid": "ef3c40f5-dd39-4542-9be1-b2684c1bd283", 00:16:37.321 "is_configured": true, 00:16:37.321 "data_offset": 0, 00:16:37.321 "data_size": 65536 00:16:37.321 }, 00:16:37.321 { 00:16:37.321 "name": null, 00:16:37.321 "uuid": "cffb29ed-6c62-4ffa-a75e-d25b13524460", 00:16:37.321 "is_configured": false, 00:16:37.321 "data_offset": 0, 00:16:37.321 "data_size": 65536 00:16:37.321 }, 00:16:37.321 { 00:16:37.321 "name": null, 00:16:37.321 "uuid": "03bd6697-3df9-46ef-8ce5-73910579dee5", 00:16:37.321 "is_configured": false, 00:16:37.321 "data_offset": 0, 00:16:37.321 "data_size": 65536 00:16:37.321 } 00:16:37.321 ] 00:16:37.321 }' 00:16:37.321 21:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:37.321 21:59:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.890 21:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.890 21:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:37.890 21:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:37.890 21:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:38.149 [2024-07-13 21:59:57.345067] bdev_raid.c:3198:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:16:38.149 21:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:38.149 21:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:38.149 21:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:38.149 21:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:38.149 21:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:38.149 21:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:38.149 21:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:38.149 21:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:38.149 21:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:38.149 21:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:38.149 21:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.149 21:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.149 21:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:38.149 "name": "Existed_Raid", 00:16:38.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.149 "strip_size_kb": 0, 00:16:38.149 "state": "configuring", 00:16:38.149 "raid_level": "raid1", 00:16:38.149 "superblock": false, 00:16:38.149 "num_base_bdevs": 3, 00:16:38.149 "num_base_bdevs_discovered": 2, 00:16:38.149 "num_base_bdevs_operational": 3, 
00:16:38.149 "base_bdevs_list": [ 00:16:38.149 { 00:16:38.149 "name": "BaseBdev1", 00:16:38.149 "uuid": "ef3c40f5-dd39-4542-9be1-b2684c1bd283", 00:16:38.149 "is_configured": true, 00:16:38.149 "data_offset": 0, 00:16:38.149 "data_size": 65536 00:16:38.149 }, 00:16:38.149 { 00:16:38.149 "name": null, 00:16:38.149 "uuid": "cffb29ed-6c62-4ffa-a75e-d25b13524460", 00:16:38.149 "is_configured": false, 00:16:38.149 "data_offset": 0, 00:16:38.149 "data_size": 65536 00:16:38.149 }, 00:16:38.149 { 00:16:38.149 "name": "BaseBdev3", 00:16:38.149 "uuid": "03bd6697-3df9-46ef-8ce5-73910579dee5", 00:16:38.149 "is_configured": true, 00:16:38.149 "data_offset": 0, 00:16:38.149 "data_size": 65536 00:16:38.149 } 00:16:38.149 ] 00:16:38.149 }' 00:16:38.149 21:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:38.149 21:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.718 21:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:38.718 21:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.977 21:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:38.977 21:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:38.977 [2024-07-13 21:59:58.331651] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:39.235 21:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:39.235 21:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:39.235 21:59:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:39.235 21:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:39.235 21:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:39.235 21:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:39.235 21:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:39.235 21:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:39.235 21:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:39.235 21:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:39.235 21:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.235 21:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.235 21:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:39.235 "name": "Existed_Raid", 00:16:39.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.235 "strip_size_kb": 0, 00:16:39.235 "state": "configuring", 00:16:39.235 "raid_level": "raid1", 00:16:39.235 "superblock": false, 00:16:39.235 "num_base_bdevs": 3, 00:16:39.235 "num_base_bdevs_discovered": 1, 00:16:39.235 "num_base_bdevs_operational": 3, 00:16:39.235 "base_bdevs_list": [ 00:16:39.235 { 00:16:39.235 "name": null, 00:16:39.235 "uuid": "ef3c40f5-dd39-4542-9be1-b2684c1bd283", 00:16:39.235 "is_configured": false, 00:16:39.235 "data_offset": 0, 00:16:39.235 "data_size": 65536 00:16:39.235 }, 00:16:39.235 { 00:16:39.235 "name": null, 00:16:39.235 "uuid": 
"cffb29ed-6c62-4ffa-a75e-d25b13524460", 00:16:39.235 "is_configured": false, 00:16:39.235 "data_offset": 0, 00:16:39.235 "data_size": 65536 00:16:39.235 }, 00:16:39.235 { 00:16:39.235 "name": "BaseBdev3", 00:16:39.235 "uuid": "03bd6697-3df9-46ef-8ce5-73910579dee5", 00:16:39.235 "is_configured": true, 00:16:39.235 "data_offset": 0, 00:16:39.235 "data_size": 65536 00:16:39.235 } 00:16:39.235 ] 00:16:39.235 }' 00:16:39.235 21:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:39.235 21:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.802 21:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.802 21:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:40.061 21:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:40.061 21:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:40.061 [2024-07-13 21:59:59.431161] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:40.061 21:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:40.061 21:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:40.061 21:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:40.061 21:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:40.061 21:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=0 00:16:40.061 21:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:40.061 21:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:40.061 21:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:40.061 21:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:40.061 21:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:40.320 21:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.320 21:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.320 21:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:40.320 "name": "Existed_Raid", 00:16:40.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.320 "strip_size_kb": 0, 00:16:40.320 "state": "configuring", 00:16:40.320 "raid_level": "raid1", 00:16:40.320 "superblock": false, 00:16:40.320 "num_base_bdevs": 3, 00:16:40.320 "num_base_bdevs_discovered": 2, 00:16:40.320 "num_base_bdevs_operational": 3, 00:16:40.320 "base_bdevs_list": [ 00:16:40.320 { 00:16:40.320 "name": null, 00:16:40.320 "uuid": "ef3c40f5-dd39-4542-9be1-b2684c1bd283", 00:16:40.320 "is_configured": false, 00:16:40.320 "data_offset": 0, 00:16:40.320 "data_size": 65536 00:16:40.320 }, 00:16:40.320 { 00:16:40.320 "name": "BaseBdev2", 00:16:40.320 "uuid": "cffb29ed-6c62-4ffa-a75e-d25b13524460", 00:16:40.320 "is_configured": true, 00:16:40.320 "data_offset": 0, 00:16:40.320 "data_size": 65536 00:16:40.320 }, 00:16:40.320 { 00:16:40.320 "name": "BaseBdev3", 00:16:40.320 "uuid": "03bd6697-3df9-46ef-8ce5-73910579dee5", 00:16:40.320 "is_configured": true, 
00:16:40.320 "data_offset": 0, 00:16:40.320 "data_size": 65536 00:16:40.320 } 00:16:40.320 ] 00:16:40.320 }' 00:16:40.320 21:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:40.320 21:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.915 22:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.915 22:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:40.915 22:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:41.173 22:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.173 22:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:41.173 22:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u ef3c40f5-dd39-4542-9be1-b2684c1bd283 00:16:41.433 [2024-07-13 22:00:00.662630] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:41.433 [2024-07-13 22:00:00.662678] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041780 00:16:41.433 [2024-07-13 22:00:00.662687] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:41.433 [2024-07-13 22:00:00.662940] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:16:41.433 [2024-07-13 22:00:00.663120] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041780 00:16:41.433 [2024-07-13 
22:00:00.663133] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000041780 00:16:41.433 [2024-07-13 22:00:00.663390] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.433 NewBaseBdev 00:16:41.433 22:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:41.433 22:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:16:41.433 22:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:41.433 22:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:41.433 22:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:41.433 22:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:41.433 22:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:41.692 22:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:41.692 [ 00:16:41.692 { 00:16:41.692 "name": "NewBaseBdev", 00:16:41.692 "aliases": [ 00:16:41.692 "ef3c40f5-dd39-4542-9be1-b2684c1bd283" 00:16:41.692 ], 00:16:41.692 "product_name": "Malloc disk", 00:16:41.692 "block_size": 512, 00:16:41.692 "num_blocks": 65536, 00:16:41.692 "uuid": "ef3c40f5-dd39-4542-9be1-b2684c1bd283", 00:16:41.692 "assigned_rate_limits": { 00:16:41.692 "rw_ios_per_sec": 0, 00:16:41.692 "rw_mbytes_per_sec": 0, 00:16:41.692 "r_mbytes_per_sec": 0, 00:16:41.692 "w_mbytes_per_sec": 0 00:16:41.692 }, 00:16:41.692 "claimed": true, 00:16:41.692 "claim_type": "exclusive_write", 00:16:41.692 
"zoned": false, 00:16:41.692 "supported_io_types": { 00:16:41.692 "read": true, 00:16:41.692 "write": true, 00:16:41.692 "unmap": true, 00:16:41.692 "flush": true, 00:16:41.692 "reset": true, 00:16:41.692 "nvme_admin": false, 00:16:41.692 "nvme_io": false, 00:16:41.692 "nvme_io_md": false, 00:16:41.692 "write_zeroes": true, 00:16:41.692 "zcopy": true, 00:16:41.692 "get_zone_info": false, 00:16:41.692 "zone_management": false, 00:16:41.692 "zone_append": false, 00:16:41.692 "compare": false, 00:16:41.692 "compare_and_write": false, 00:16:41.692 "abort": true, 00:16:41.692 "seek_hole": false, 00:16:41.692 "seek_data": false, 00:16:41.692 "copy": true, 00:16:41.692 "nvme_iov_md": false 00:16:41.692 }, 00:16:41.692 "memory_domains": [ 00:16:41.692 { 00:16:41.692 "dma_device_id": "system", 00:16:41.692 "dma_device_type": 1 00:16:41.692 }, 00:16:41.692 { 00:16:41.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.692 "dma_device_type": 2 00:16:41.692 } 00:16:41.692 ], 00:16:41.692 "driver_specific": {} 00:16:41.692 } 00:16:41.692 ] 00:16:41.692 22:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:41.692 22:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:41.692 22:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:41.692 22:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:41.692 22:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:41.692 22:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:41.692 22:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:41.692 22:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:41.692 22:00:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:41.692 22:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:41.692 22:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:41.692 22:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.692 22:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.952 22:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:41.952 "name": "Existed_Raid", 00:16:41.952 "uuid": "2c41863f-7306-4332-9534-2118478e030e", 00:16:41.952 "strip_size_kb": 0, 00:16:41.952 "state": "online", 00:16:41.952 "raid_level": "raid1", 00:16:41.952 "superblock": false, 00:16:41.952 "num_base_bdevs": 3, 00:16:41.952 "num_base_bdevs_discovered": 3, 00:16:41.952 "num_base_bdevs_operational": 3, 00:16:41.952 "base_bdevs_list": [ 00:16:41.952 { 00:16:41.952 "name": "NewBaseBdev", 00:16:41.952 "uuid": "ef3c40f5-dd39-4542-9be1-b2684c1bd283", 00:16:41.952 "is_configured": true, 00:16:41.952 "data_offset": 0, 00:16:41.952 "data_size": 65536 00:16:41.952 }, 00:16:41.952 { 00:16:41.952 "name": "BaseBdev2", 00:16:41.952 "uuid": "cffb29ed-6c62-4ffa-a75e-d25b13524460", 00:16:41.952 "is_configured": true, 00:16:41.952 "data_offset": 0, 00:16:41.952 "data_size": 65536 00:16:41.952 }, 00:16:41.952 { 00:16:41.952 "name": "BaseBdev3", 00:16:41.952 "uuid": "03bd6697-3df9-46ef-8ce5-73910579dee5", 00:16:41.952 "is_configured": true, 00:16:41.952 "data_offset": 0, 00:16:41.952 "data_size": 65536 00:16:41.952 } 00:16:41.952 ] 00:16:41.952 }' 00:16:41.952 22:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:41.952 22:00:01 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:16:42.519 22:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:42.519 22:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:42.519 22:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:42.519 22:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:42.519 22:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:42.519 22:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:42.519 22:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:42.519 22:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:42.519 [2024-07-13 22:00:01.842186] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:42.519 22:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:42.519 "name": "Existed_Raid", 00:16:42.519 "aliases": [ 00:16:42.519 "2c41863f-7306-4332-9534-2118478e030e" 00:16:42.519 ], 00:16:42.519 "product_name": "Raid Volume", 00:16:42.519 "block_size": 512, 00:16:42.519 "num_blocks": 65536, 00:16:42.519 "uuid": "2c41863f-7306-4332-9534-2118478e030e", 00:16:42.519 "assigned_rate_limits": { 00:16:42.519 "rw_ios_per_sec": 0, 00:16:42.519 "rw_mbytes_per_sec": 0, 00:16:42.519 "r_mbytes_per_sec": 0, 00:16:42.519 "w_mbytes_per_sec": 0 00:16:42.519 }, 00:16:42.519 "claimed": false, 00:16:42.519 "zoned": false, 00:16:42.519 "supported_io_types": { 00:16:42.519 "read": true, 00:16:42.519 "write": true, 00:16:42.519 "unmap": false, 00:16:42.519 "flush": false, 00:16:42.519 "reset": true, 00:16:42.519 
"nvme_admin": false, 00:16:42.519 "nvme_io": false, 00:16:42.519 "nvme_io_md": false, 00:16:42.519 "write_zeroes": true, 00:16:42.519 "zcopy": false, 00:16:42.519 "get_zone_info": false, 00:16:42.519 "zone_management": false, 00:16:42.519 "zone_append": false, 00:16:42.519 "compare": false, 00:16:42.519 "compare_and_write": false, 00:16:42.519 "abort": false, 00:16:42.519 "seek_hole": false, 00:16:42.519 "seek_data": false, 00:16:42.519 "copy": false, 00:16:42.519 "nvme_iov_md": false 00:16:42.519 }, 00:16:42.519 "memory_domains": [ 00:16:42.519 { 00:16:42.519 "dma_device_id": "system", 00:16:42.519 "dma_device_type": 1 00:16:42.519 }, 00:16:42.519 { 00:16:42.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.519 "dma_device_type": 2 00:16:42.519 }, 00:16:42.519 { 00:16:42.519 "dma_device_id": "system", 00:16:42.519 "dma_device_type": 1 00:16:42.519 }, 00:16:42.519 { 00:16:42.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.519 "dma_device_type": 2 00:16:42.519 }, 00:16:42.519 { 00:16:42.519 "dma_device_id": "system", 00:16:42.519 "dma_device_type": 1 00:16:42.519 }, 00:16:42.519 { 00:16:42.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.519 "dma_device_type": 2 00:16:42.519 } 00:16:42.519 ], 00:16:42.519 "driver_specific": { 00:16:42.519 "raid": { 00:16:42.519 "uuid": "2c41863f-7306-4332-9534-2118478e030e", 00:16:42.519 "strip_size_kb": 0, 00:16:42.519 "state": "online", 00:16:42.519 "raid_level": "raid1", 00:16:42.519 "superblock": false, 00:16:42.519 "num_base_bdevs": 3, 00:16:42.519 "num_base_bdevs_discovered": 3, 00:16:42.519 "num_base_bdevs_operational": 3, 00:16:42.519 "base_bdevs_list": [ 00:16:42.519 { 00:16:42.519 "name": "NewBaseBdev", 00:16:42.519 "uuid": "ef3c40f5-dd39-4542-9be1-b2684c1bd283", 00:16:42.519 "is_configured": true, 00:16:42.519 "data_offset": 0, 00:16:42.519 "data_size": 65536 00:16:42.519 }, 00:16:42.519 { 00:16:42.519 "name": "BaseBdev2", 00:16:42.519 "uuid": "cffb29ed-6c62-4ffa-a75e-d25b13524460", 00:16:42.519 
"is_configured": true, 00:16:42.519 "data_offset": 0, 00:16:42.519 "data_size": 65536 00:16:42.519 }, 00:16:42.519 { 00:16:42.519 "name": "BaseBdev3", 00:16:42.519 "uuid": "03bd6697-3df9-46ef-8ce5-73910579dee5", 00:16:42.519 "is_configured": true, 00:16:42.519 "data_offset": 0, 00:16:42.519 "data_size": 65536 00:16:42.519 } 00:16:42.519 ] 00:16:42.519 } 00:16:42.519 } 00:16:42.519 }' 00:16:42.519 22:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:42.519 22:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:42.519 BaseBdev2 00:16:42.519 BaseBdev3' 00:16:42.519 22:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:42.776 22:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:42.776 22:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:42.776 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:42.776 "name": "NewBaseBdev", 00:16:42.776 "aliases": [ 00:16:42.776 "ef3c40f5-dd39-4542-9be1-b2684c1bd283" 00:16:42.777 ], 00:16:42.777 "product_name": "Malloc disk", 00:16:42.777 "block_size": 512, 00:16:42.777 "num_blocks": 65536, 00:16:42.777 "uuid": "ef3c40f5-dd39-4542-9be1-b2684c1bd283", 00:16:42.777 "assigned_rate_limits": { 00:16:42.777 "rw_ios_per_sec": 0, 00:16:42.777 "rw_mbytes_per_sec": 0, 00:16:42.777 "r_mbytes_per_sec": 0, 00:16:42.777 "w_mbytes_per_sec": 0 00:16:42.777 }, 00:16:42.777 "claimed": true, 00:16:42.777 "claim_type": "exclusive_write", 00:16:42.777 "zoned": false, 00:16:42.777 "supported_io_types": { 00:16:42.777 "read": true, 00:16:42.777 "write": true, 00:16:42.777 "unmap": true, 00:16:42.777 "flush": true, 
00:16:42.777 "reset": true, 00:16:42.777 "nvme_admin": false, 00:16:42.777 "nvme_io": false, 00:16:42.777 "nvme_io_md": false, 00:16:42.777 "write_zeroes": true, 00:16:42.777 "zcopy": true, 00:16:42.777 "get_zone_info": false, 00:16:42.777 "zone_management": false, 00:16:42.777 "zone_append": false, 00:16:42.777 "compare": false, 00:16:42.777 "compare_and_write": false, 00:16:42.777 "abort": true, 00:16:42.777 "seek_hole": false, 00:16:42.777 "seek_data": false, 00:16:42.777 "copy": true, 00:16:42.777 "nvme_iov_md": false 00:16:42.777 }, 00:16:42.777 "memory_domains": [ 00:16:42.777 { 00:16:42.777 "dma_device_id": "system", 00:16:42.777 "dma_device_type": 1 00:16:42.777 }, 00:16:42.777 { 00:16:42.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.777 "dma_device_type": 2 00:16:42.777 } 00:16:42.777 ], 00:16:42.777 "driver_specific": {} 00:16:42.777 }' 00:16:42.777 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:42.777 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:43.035 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:43.035 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:43.035 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:43.035 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:43.035 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:43.035 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:43.035 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:43.035 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:43.035 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:16:43.035 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:43.035 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:43.035 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:43.035 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:43.293 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:43.294 "name": "BaseBdev2", 00:16:43.294 "aliases": [ 00:16:43.294 "cffb29ed-6c62-4ffa-a75e-d25b13524460" 00:16:43.294 ], 00:16:43.294 "product_name": "Malloc disk", 00:16:43.294 "block_size": 512, 00:16:43.294 "num_blocks": 65536, 00:16:43.294 "uuid": "cffb29ed-6c62-4ffa-a75e-d25b13524460", 00:16:43.294 "assigned_rate_limits": { 00:16:43.294 "rw_ios_per_sec": 0, 00:16:43.294 "rw_mbytes_per_sec": 0, 00:16:43.294 "r_mbytes_per_sec": 0, 00:16:43.294 "w_mbytes_per_sec": 0 00:16:43.294 }, 00:16:43.294 "claimed": true, 00:16:43.294 "claim_type": "exclusive_write", 00:16:43.294 "zoned": false, 00:16:43.294 "supported_io_types": { 00:16:43.294 "read": true, 00:16:43.294 "write": true, 00:16:43.294 "unmap": true, 00:16:43.294 "flush": true, 00:16:43.294 "reset": true, 00:16:43.294 "nvme_admin": false, 00:16:43.294 "nvme_io": false, 00:16:43.294 "nvme_io_md": false, 00:16:43.294 "write_zeroes": true, 00:16:43.294 "zcopy": true, 00:16:43.294 "get_zone_info": false, 00:16:43.294 "zone_management": false, 00:16:43.294 "zone_append": false, 00:16:43.294 "compare": false, 00:16:43.294 "compare_and_write": false, 00:16:43.294 "abort": true, 00:16:43.294 "seek_hole": false, 00:16:43.294 "seek_data": false, 00:16:43.294 "copy": true, 00:16:43.294 "nvme_iov_md": false 00:16:43.294 }, 00:16:43.294 "memory_domains": [ 00:16:43.294 { 00:16:43.294 "dma_device_id": 
"system", 00:16:43.294 "dma_device_type": 1 00:16:43.294 }, 00:16:43.294 { 00:16:43.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.294 "dma_device_type": 2 00:16:43.294 } 00:16:43.294 ], 00:16:43.294 "driver_specific": {} 00:16:43.294 }' 00:16:43.294 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:43.294 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:43.294 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:43.294 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:43.294 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:43.552 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:43.552 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:43.552 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:43.552 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:43.552 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:43.552 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:43.552 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:43.552 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:43.552 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:43.552 22:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:43.809 22:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:16:43.809 "name": "BaseBdev3", 00:16:43.809 "aliases": [ 00:16:43.809 "03bd6697-3df9-46ef-8ce5-73910579dee5" 00:16:43.809 ], 00:16:43.809 "product_name": "Malloc disk", 00:16:43.809 "block_size": 512, 00:16:43.809 "num_blocks": 65536, 00:16:43.809 "uuid": "03bd6697-3df9-46ef-8ce5-73910579dee5", 00:16:43.809 "assigned_rate_limits": { 00:16:43.809 "rw_ios_per_sec": 0, 00:16:43.809 "rw_mbytes_per_sec": 0, 00:16:43.809 "r_mbytes_per_sec": 0, 00:16:43.809 "w_mbytes_per_sec": 0 00:16:43.809 }, 00:16:43.809 "claimed": true, 00:16:43.809 "claim_type": "exclusive_write", 00:16:43.809 "zoned": false, 00:16:43.809 "supported_io_types": { 00:16:43.809 "read": true, 00:16:43.809 "write": true, 00:16:43.809 "unmap": true, 00:16:43.809 "flush": true, 00:16:43.809 "reset": true, 00:16:43.809 "nvme_admin": false, 00:16:43.809 "nvme_io": false, 00:16:43.809 "nvme_io_md": false, 00:16:43.809 "write_zeroes": true, 00:16:43.809 "zcopy": true, 00:16:43.809 "get_zone_info": false, 00:16:43.809 "zone_management": false, 00:16:43.809 "zone_append": false, 00:16:43.809 "compare": false, 00:16:43.809 "compare_and_write": false, 00:16:43.809 "abort": true, 00:16:43.809 "seek_hole": false, 00:16:43.809 "seek_data": false, 00:16:43.809 "copy": true, 00:16:43.809 "nvme_iov_md": false 00:16:43.809 }, 00:16:43.809 "memory_domains": [ 00:16:43.809 { 00:16:43.809 "dma_device_id": "system", 00:16:43.809 "dma_device_type": 1 00:16:43.809 }, 00:16:43.809 { 00:16:43.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.810 "dma_device_type": 2 00:16:43.810 } 00:16:43.810 ], 00:16:43.810 "driver_specific": {} 00:16:43.810 }' 00:16:43.810 22:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:43.810 22:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:43.810 22:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:43.810 22:00:03 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:43.810 22:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:43.810 22:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:43.810 22:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:44.068 22:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:44.068 22:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:44.068 22:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:44.068 22:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:44.068 22:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:44.068 22:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:44.325 [2024-07-13 22:00:03.474106] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:44.325 [2024-07-13 22:00:03.474142] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:44.325 [2024-07-13 22:00:03.474225] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:44.325 [2024-07-13 22:00:03.474485] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:44.325 [2024-07-13 22:00:03.474498] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041780 name Existed_Raid, state offline 00:16:44.325 22:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1396516 00:16:44.325 22:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 1396516 ']' 00:16:44.325 22:00:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 1396516 00:16:44.325 22:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:16:44.325 22:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:44.325 22:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1396516 00:16:44.325 22:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:44.325 22:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:44.325 22:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1396516' 00:16:44.325 killing process with pid 1396516 00:16:44.325 22:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 1396516 00:16:44.325 [2024-07-13 22:00:03.528131] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:44.325 22:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 1396516 00:16:44.583 [2024-07-13 22:00:03.760738] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:45.962 22:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:45.962 00:16:45.962 real 0m23.278s 00:16:45.962 user 0m40.745s 00:16:45.962 sys 0m4.362s 00:16:45.962 22:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:45.962 22:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.962 ************************************ 00:16:45.962 END TEST raid_state_function_test 00:16:45.962 ************************************ 00:16:45.962 22:00:05 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:45.962 22:00:05 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 3 true 00:16:45.962 22:00:05 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:45.962 22:00:05 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:45.962 22:00:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:45.962 ************************************ 00:16:45.962 START TEST raid_state_function_test_sb 00:16:45.962 ************************************ 00:16:45.962 22:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 true 00:16:45.962 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # 
echo BaseBdev3 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1401531 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1401531' 00:16:45.963 Process raid pid: 1401531 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1401531 
/var/tmp/spdk-raid.sock 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1401531 ']' 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:45.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.963 22:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.963 [2024-07-13 22:00:05.237742] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:16:45.963 [2024-07-13 22:00:05.237825] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.963 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:45.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.963 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:45.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.963 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:45.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.963 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:45.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.963 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:45.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.963 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:45.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.963 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:45.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.963 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:45.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.963 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:45.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.963 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:45.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.963 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:45.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.963 EAL: Requested device 0000:3d:02.3 cannot be used 00:16:45.963 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.963 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:45.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.963 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:45.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.963 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:45.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.963 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:45.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.963 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:45.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.963 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:45.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.963 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:45.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.963 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:45.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.963 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:45.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.963 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:45.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.963 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:45.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.963 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:45.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.963 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:45.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.963 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:45.963 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.964 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:45.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.964 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:45.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.964 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:45.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.964 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:45.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.964 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:45.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.964 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:46.242 [2024-07-13 22:00:05.403947] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.242 [2024-07-13 22:00:05.620331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.505 [2024-07-13 22:00:05.866870] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:46.505 [2024-07-13 22:00:05.866909] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:46.765 22:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:46.765 22:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:16:46.765 22:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:46.765 [2024-07-13 22:00:06.152163] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:46.765 [2024-07-13 22:00:06.152212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:16:46.765 [2024-07-13 22:00:06.152227] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:46.765 [2024-07-13 22:00:06.152242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:46.765 [2024-07-13 22:00:06.152250] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:46.765 [2024-07-13 22:00:06.152262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:47.023 22:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:47.023 22:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:47.023 22:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:47.023 22:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:47.023 22:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:47.023 22:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:47.023 22:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:47.023 22:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:47.023 22:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:47.023 22:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:47.023 22:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.024 22:00:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.024 22:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:47.024 "name": "Existed_Raid", 00:16:47.024 "uuid": "d16ded07-d770-4273-a3f3-c5909d626e6a", 00:16:47.024 "strip_size_kb": 0, 00:16:47.024 "state": "configuring", 00:16:47.024 "raid_level": "raid1", 00:16:47.024 "superblock": true, 00:16:47.024 "num_base_bdevs": 3, 00:16:47.024 "num_base_bdevs_discovered": 0, 00:16:47.024 "num_base_bdevs_operational": 3, 00:16:47.024 "base_bdevs_list": [ 00:16:47.024 { 00:16:47.024 "name": "BaseBdev1", 00:16:47.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.024 "is_configured": false, 00:16:47.024 "data_offset": 0, 00:16:47.024 "data_size": 0 00:16:47.024 }, 00:16:47.024 { 00:16:47.024 "name": "BaseBdev2", 00:16:47.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.024 "is_configured": false, 00:16:47.024 "data_offset": 0, 00:16:47.024 "data_size": 0 00:16:47.024 }, 00:16:47.024 { 00:16:47.024 "name": "BaseBdev3", 00:16:47.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.024 "is_configured": false, 00:16:47.024 "data_offset": 0, 00:16:47.024 "data_size": 0 00:16:47.024 } 00:16:47.024 ] 00:16:47.024 }' 00:16:47.024 22:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:47.024 22:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.592 22:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:47.851 [2024-07-13 22:00:07.014291] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:47.851 [2024-07-13 22:00:07.014327] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:16:47.851 22:00:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:47.851 [2024-07-13 22:00:07.182755] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:47.851 [2024-07-13 22:00:07.182790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:47.851 [2024-07-13 22:00:07.182800] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:47.851 [2024-07-13 22:00:07.182830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:47.851 [2024-07-13 22:00:07.182841] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:47.851 [2024-07-13 22:00:07.182852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:47.851 22:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:48.109 [2024-07-13 22:00:07.380178] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:48.109 BaseBdev1 00:16:48.109 22:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:48.109 22:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:48.109 22:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:48.109 22:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:48.109 22:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:48.109 22:00:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:48.109 22:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:48.367 22:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:48.367 [ 00:16:48.367 { 00:16:48.367 "name": "BaseBdev1", 00:16:48.367 "aliases": [ 00:16:48.367 "d81fe4f6-4b03-4ec2-b678-ea5a126437dc" 00:16:48.367 ], 00:16:48.367 "product_name": "Malloc disk", 00:16:48.367 "block_size": 512, 00:16:48.367 "num_blocks": 65536, 00:16:48.367 "uuid": "d81fe4f6-4b03-4ec2-b678-ea5a126437dc", 00:16:48.367 "assigned_rate_limits": { 00:16:48.367 "rw_ios_per_sec": 0, 00:16:48.367 "rw_mbytes_per_sec": 0, 00:16:48.368 "r_mbytes_per_sec": 0, 00:16:48.368 "w_mbytes_per_sec": 0 00:16:48.368 }, 00:16:48.368 "claimed": true, 00:16:48.368 "claim_type": "exclusive_write", 00:16:48.368 "zoned": false, 00:16:48.368 "supported_io_types": { 00:16:48.368 "read": true, 00:16:48.368 "write": true, 00:16:48.368 "unmap": true, 00:16:48.368 "flush": true, 00:16:48.368 "reset": true, 00:16:48.368 "nvme_admin": false, 00:16:48.368 "nvme_io": false, 00:16:48.368 "nvme_io_md": false, 00:16:48.368 "write_zeroes": true, 00:16:48.368 "zcopy": true, 00:16:48.368 "get_zone_info": false, 00:16:48.368 "zone_management": false, 00:16:48.368 "zone_append": false, 00:16:48.368 "compare": false, 00:16:48.368 "compare_and_write": false, 00:16:48.368 "abort": true, 00:16:48.368 "seek_hole": false, 00:16:48.368 "seek_data": false, 00:16:48.368 "copy": true, 00:16:48.368 "nvme_iov_md": false 00:16:48.368 }, 00:16:48.368 "memory_domains": [ 00:16:48.368 { 00:16:48.368 "dma_device_id": "system", 00:16:48.368 "dma_device_type": 1 00:16:48.368 }, 
00:16:48.368 { 00:16:48.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.368 "dma_device_type": 2 00:16:48.368 } 00:16:48.368 ], 00:16:48.368 "driver_specific": {} 00:16:48.368 } 00:16:48.368 ] 00:16:48.368 22:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:48.368 22:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:48.368 22:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:48.368 22:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:48.368 22:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:48.368 22:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:48.368 22:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:48.368 22:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:48.368 22:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:48.368 22:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:48.368 22:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:48.368 22:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.368 22:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.626 22:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:48.626 "name": "Existed_Raid", 00:16:48.626 "uuid": 
"1e40e888-ddba-4542-a845-759c6dbecca9", 00:16:48.626 "strip_size_kb": 0, 00:16:48.626 "state": "configuring", 00:16:48.626 "raid_level": "raid1", 00:16:48.626 "superblock": true, 00:16:48.626 "num_base_bdevs": 3, 00:16:48.626 "num_base_bdevs_discovered": 1, 00:16:48.626 "num_base_bdevs_operational": 3, 00:16:48.626 "base_bdevs_list": [ 00:16:48.626 { 00:16:48.626 "name": "BaseBdev1", 00:16:48.626 "uuid": "d81fe4f6-4b03-4ec2-b678-ea5a126437dc", 00:16:48.626 "is_configured": true, 00:16:48.626 "data_offset": 2048, 00:16:48.626 "data_size": 63488 00:16:48.626 }, 00:16:48.626 { 00:16:48.626 "name": "BaseBdev2", 00:16:48.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.626 "is_configured": false, 00:16:48.626 "data_offset": 0, 00:16:48.626 "data_size": 0 00:16:48.626 }, 00:16:48.626 { 00:16:48.626 "name": "BaseBdev3", 00:16:48.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.626 "is_configured": false, 00:16:48.626 "data_offset": 0, 00:16:48.626 "data_size": 0 00:16:48.626 } 00:16:48.626 ] 00:16:48.626 }' 00:16:48.626 22:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:48.626 22:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.192 22:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:49.192 [2024-07-13 22:00:08.563309] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:49.192 [2024-07-13 22:00:08.563363] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:16:49.451 22:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 
00:16:49.451 [2024-07-13 22:00:08.731820] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:49.451 [2024-07-13 22:00:08.733495] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:49.451 [2024-07-13 22:00:08.733530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:49.451 [2024-07-13 22:00:08.733540] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:49.451 [2024-07-13 22:00:08.733568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:49.451 22:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:49.451 22:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:49.451 22:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:49.451 22:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:49.451 22:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:49.451 22:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:49.451 22:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:49.451 22:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:49.451 22:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:49.451 22:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:49.451 22:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:49.451 22:00:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:49.451 22:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.451 22:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.710 22:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:49.710 "name": "Existed_Raid", 00:16:49.710 "uuid": "b0e9149d-ae92-4ae3-8a66-980b3562d41f", 00:16:49.710 "strip_size_kb": 0, 00:16:49.710 "state": "configuring", 00:16:49.710 "raid_level": "raid1", 00:16:49.710 "superblock": true, 00:16:49.710 "num_base_bdevs": 3, 00:16:49.710 "num_base_bdevs_discovered": 1, 00:16:49.710 "num_base_bdevs_operational": 3, 00:16:49.710 "base_bdevs_list": [ 00:16:49.710 { 00:16:49.710 "name": "BaseBdev1", 00:16:49.710 "uuid": "d81fe4f6-4b03-4ec2-b678-ea5a126437dc", 00:16:49.710 "is_configured": true, 00:16:49.710 "data_offset": 2048, 00:16:49.710 "data_size": 63488 00:16:49.710 }, 00:16:49.710 { 00:16:49.710 "name": "BaseBdev2", 00:16:49.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.710 "is_configured": false, 00:16:49.710 "data_offset": 0, 00:16:49.710 "data_size": 0 00:16:49.710 }, 00:16:49.710 { 00:16:49.710 "name": "BaseBdev3", 00:16:49.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.710 "is_configured": false, 00:16:49.710 "data_offset": 0, 00:16:49.710 "data_size": 0 00:16:49.710 } 00:16:49.710 ] 00:16:49.710 }' 00:16:49.710 22:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:49.710 22:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.277 22:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:50.277 [2024-07-13 22:00:09.618437] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:50.277 BaseBdev2 00:16:50.277 22:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:50.277 22:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:50.277 22:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:50.277 22:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:50.277 22:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:50.277 22:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:50.277 22:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:50.535 22:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:50.794 [ 00:16:50.794 { 00:16:50.794 "name": "BaseBdev2", 00:16:50.794 "aliases": [ 00:16:50.794 "c9680453-9a04-4c52-9c3c-aadb81b067b3" 00:16:50.794 ], 00:16:50.794 "product_name": "Malloc disk", 00:16:50.794 "block_size": 512, 00:16:50.794 "num_blocks": 65536, 00:16:50.794 "uuid": "c9680453-9a04-4c52-9c3c-aadb81b067b3", 00:16:50.794 "assigned_rate_limits": { 00:16:50.794 "rw_ios_per_sec": 0, 00:16:50.794 "rw_mbytes_per_sec": 0, 00:16:50.794 "r_mbytes_per_sec": 0, 00:16:50.794 "w_mbytes_per_sec": 0 00:16:50.794 }, 00:16:50.794 "claimed": true, 00:16:50.794 "claim_type": "exclusive_write", 00:16:50.794 "zoned": false, 00:16:50.794 "supported_io_types": { 
00:16:50.794 "read": true, 00:16:50.794 "write": true, 00:16:50.794 "unmap": true, 00:16:50.794 "flush": true, 00:16:50.794 "reset": true, 00:16:50.794 "nvme_admin": false, 00:16:50.794 "nvme_io": false, 00:16:50.794 "nvme_io_md": false, 00:16:50.794 "write_zeroes": true, 00:16:50.794 "zcopy": true, 00:16:50.794 "get_zone_info": false, 00:16:50.794 "zone_management": false, 00:16:50.794 "zone_append": false, 00:16:50.795 "compare": false, 00:16:50.795 "compare_and_write": false, 00:16:50.795 "abort": true, 00:16:50.795 "seek_hole": false, 00:16:50.795 "seek_data": false, 00:16:50.795 "copy": true, 00:16:50.795 "nvme_iov_md": false 00:16:50.795 }, 00:16:50.795 "memory_domains": [ 00:16:50.795 { 00:16:50.795 "dma_device_id": "system", 00:16:50.795 "dma_device_type": 1 00:16:50.795 }, 00:16:50.795 { 00:16:50.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.795 "dma_device_type": 2 00:16:50.795 } 00:16:50.795 ], 00:16:50.795 "driver_specific": {} 00:16:50.795 } 00:16:50.795 ] 00:16:50.795 22:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:50.795 22:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:50.795 22:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:50.795 22:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:50.795 22:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:50.795 22:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:50.795 22:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:50.795 22:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:50.795 22:00:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:50.795 22:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:50.795 22:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:50.795 22:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:50.795 22:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:50.795 22:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.795 22:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.795 22:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:50.795 "name": "Existed_Raid", 00:16:50.795 "uuid": "b0e9149d-ae92-4ae3-8a66-980b3562d41f", 00:16:50.795 "strip_size_kb": 0, 00:16:50.795 "state": "configuring", 00:16:50.795 "raid_level": "raid1", 00:16:50.795 "superblock": true, 00:16:50.795 "num_base_bdevs": 3, 00:16:50.795 "num_base_bdevs_discovered": 2, 00:16:50.795 "num_base_bdevs_operational": 3, 00:16:50.795 "base_bdevs_list": [ 00:16:50.795 { 00:16:50.795 "name": "BaseBdev1", 00:16:50.795 "uuid": "d81fe4f6-4b03-4ec2-b678-ea5a126437dc", 00:16:50.795 "is_configured": true, 00:16:50.795 "data_offset": 2048, 00:16:50.795 "data_size": 63488 00:16:50.795 }, 00:16:50.795 { 00:16:50.795 "name": "BaseBdev2", 00:16:50.795 "uuid": "c9680453-9a04-4c52-9c3c-aadb81b067b3", 00:16:50.795 "is_configured": true, 00:16:50.795 "data_offset": 2048, 00:16:50.795 "data_size": 63488 00:16:50.795 }, 00:16:50.795 { 00:16:50.795 "name": "BaseBdev3", 00:16:50.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.795 "is_configured": false, 00:16:50.795 "data_offset": 0, 00:16:50.795 
"data_size": 0 00:16:50.795 } 00:16:50.795 ] 00:16:50.795 }' 00:16:50.795 22:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:50.795 22:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.368 22:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:51.626 [2024-07-13 22:00:10.811125] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:51.626 [2024-07-13 22:00:10.811340] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:16:51.626 [2024-07-13 22:00:10.811362] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:51.626 [2024-07-13 22:00:10.811596] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:16:51.626 [2024-07-13 22:00:10.811775] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:16:51.626 [2024-07-13 22:00:10.811786] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:16:51.626 [2024-07-13 22:00:10.811947] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.626 BaseBdev3 00:16:51.626 22:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:51.626 22:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:51.626 22:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:51.626 22:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:51.626 22:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:51.626 
22:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:51.626 22:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:51.626 22:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:51.885 [ 00:16:51.885 { 00:16:51.885 "name": "BaseBdev3", 00:16:51.885 "aliases": [ 00:16:51.885 "ff237c26-3570-444d-86bd-0948e3268d56" 00:16:51.885 ], 00:16:51.885 "product_name": "Malloc disk", 00:16:51.885 "block_size": 512, 00:16:51.885 "num_blocks": 65536, 00:16:51.885 "uuid": "ff237c26-3570-444d-86bd-0948e3268d56", 00:16:51.885 "assigned_rate_limits": { 00:16:51.885 "rw_ios_per_sec": 0, 00:16:51.885 "rw_mbytes_per_sec": 0, 00:16:51.885 "r_mbytes_per_sec": 0, 00:16:51.885 "w_mbytes_per_sec": 0 00:16:51.885 }, 00:16:51.885 "claimed": true, 00:16:51.885 "claim_type": "exclusive_write", 00:16:51.885 "zoned": false, 00:16:51.885 "supported_io_types": { 00:16:51.885 "read": true, 00:16:51.885 "write": true, 00:16:51.885 "unmap": true, 00:16:51.885 "flush": true, 00:16:51.885 "reset": true, 00:16:51.885 "nvme_admin": false, 00:16:51.885 "nvme_io": false, 00:16:51.885 "nvme_io_md": false, 00:16:51.885 "write_zeroes": true, 00:16:51.885 "zcopy": true, 00:16:51.885 "get_zone_info": false, 00:16:51.885 "zone_management": false, 00:16:51.885 "zone_append": false, 00:16:51.885 "compare": false, 00:16:51.885 "compare_and_write": false, 00:16:51.885 "abort": true, 00:16:51.885 "seek_hole": false, 00:16:51.885 "seek_data": false, 00:16:51.885 "copy": true, 00:16:51.885 "nvme_iov_md": false 00:16:51.885 }, 00:16:51.885 "memory_domains": [ 00:16:51.885 { 00:16:51.885 "dma_device_id": "system", 00:16:51.885 "dma_device_type": 1 00:16:51.885 }, 
00:16:51.885 { 00:16:51.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.885 "dma_device_type": 2 00:16:51.885 } 00:16:51.885 ], 00:16:51.885 "driver_specific": {} 00:16:51.885 } 00:16:51.885 ] 00:16:51.885 22:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:51.885 22:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:51.885 22:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:51.885 22:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:51.885 22:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:51.885 22:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:51.885 22:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:51.885 22:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:51.885 22:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:51.885 22:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:51.885 22:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:51.885 22:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:51.885 22:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:51.885 22:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.885 22:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:16:52.144 22:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:52.144 "name": "Existed_Raid", 00:16:52.144 "uuid": "b0e9149d-ae92-4ae3-8a66-980b3562d41f", 00:16:52.144 "strip_size_kb": 0, 00:16:52.144 "state": "online", 00:16:52.144 "raid_level": "raid1", 00:16:52.144 "superblock": true, 00:16:52.144 "num_base_bdevs": 3, 00:16:52.144 "num_base_bdevs_discovered": 3, 00:16:52.144 "num_base_bdevs_operational": 3, 00:16:52.144 "base_bdevs_list": [ 00:16:52.144 { 00:16:52.144 "name": "BaseBdev1", 00:16:52.144 "uuid": "d81fe4f6-4b03-4ec2-b678-ea5a126437dc", 00:16:52.144 "is_configured": true, 00:16:52.144 "data_offset": 2048, 00:16:52.144 "data_size": 63488 00:16:52.144 }, 00:16:52.144 { 00:16:52.144 "name": "BaseBdev2", 00:16:52.144 "uuid": "c9680453-9a04-4c52-9c3c-aadb81b067b3", 00:16:52.144 "is_configured": true, 00:16:52.144 "data_offset": 2048, 00:16:52.144 "data_size": 63488 00:16:52.144 }, 00:16:52.144 { 00:16:52.144 "name": "BaseBdev3", 00:16:52.144 "uuid": "ff237c26-3570-444d-86bd-0948e3268d56", 00:16:52.144 "is_configured": true, 00:16:52.144 "data_offset": 2048, 00:16:52.144 "data_size": 63488 00:16:52.144 } 00:16:52.144 ] 00:16:52.144 }' 00:16:52.144 22:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:52.144 22:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.712 22:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:52.712 22:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:52.712 22:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:52.712 22:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:52.712 22:00:11 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:52.712 22:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:52.712 22:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:52.712 22:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:52.712 [2024-07-13 22:00:11.974502] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:52.712 22:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:52.712 "name": "Existed_Raid", 00:16:52.712 "aliases": [ 00:16:52.712 "b0e9149d-ae92-4ae3-8a66-980b3562d41f" 00:16:52.712 ], 00:16:52.712 "product_name": "Raid Volume", 00:16:52.712 "block_size": 512, 00:16:52.712 "num_blocks": 63488, 00:16:52.712 "uuid": "b0e9149d-ae92-4ae3-8a66-980b3562d41f", 00:16:52.712 "assigned_rate_limits": { 00:16:52.712 "rw_ios_per_sec": 0, 00:16:52.712 "rw_mbytes_per_sec": 0, 00:16:52.712 "r_mbytes_per_sec": 0, 00:16:52.712 "w_mbytes_per_sec": 0 00:16:52.712 }, 00:16:52.712 "claimed": false, 00:16:52.712 "zoned": false, 00:16:52.712 "supported_io_types": { 00:16:52.712 "read": true, 00:16:52.712 "write": true, 00:16:52.712 "unmap": false, 00:16:52.712 "flush": false, 00:16:52.712 "reset": true, 00:16:52.712 "nvme_admin": false, 00:16:52.712 "nvme_io": false, 00:16:52.712 "nvme_io_md": false, 00:16:52.712 "write_zeroes": true, 00:16:52.712 "zcopy": false, 00:16:52.712 "get_zone_info": false, 00:16:52.712 "zone_management": false, 00:16:52.712 "zone_append": false, 00:16:52.712 "compare": false, 00:16:52.712 "compare_and_write": false, 00:16:52.712 "abort": false, 00:16:52.712 "seek_hole": false, 00:16:52.712 "seek_data": false, 00:16:52.712 "copy": false, 00:16:52.712 "nvme_iov_md": false 00:16:52.712 }, 00:16:52.712 "memory_domains": [ 00:16:52.712 { 
00:16:52.712 "dma_device_id": "system", 00:16:52.712 "dma_device_type": 1 00:16:52.712 }, 00:16:52.712 { 00:16:52.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.712 "dma_device_type": 2 00:16:52.712 }, 00:16:52.712 { 00:16:52.712 "dma_device_id": "system", 00:16:52.712 "dma_device_type": 1 00:16:52.712 }, 00:16:52.712 { 00:16:52.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.712 "dma_device_type": 2 00:16:52.712 }, 00:16:52.712 { 00:16:52.712 "dma_device_id": "system", 00:16:52.712 "dma_device_type": 1 00:16:52.712 }, 00:16:52.712 { 00:16:52.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.712 "dma_device_type": 2 00:16:52.712 } 00:16:52.712 ], 00:16:52.712 "driver_specific": { 00:16:52.712 "raid": { 00:16:52.712 "uuid": "b0e9149d-ae92-4ae3-8a66-980b3562d41f", 00:16:52.712 "strip_size_kb": 0, 00:16:52.712 "state": "online", 00:16:52.712 "raid_level": "raid1", 00:16:52.712 "superblock": true, 00:16:52.712 "num_base_bdevs": 3, 00:16:52.712 "num_base_bdevs_discovered": 3, 00:16:52.712 "num_base_bdevs_operational": 3, 00:16:52.712 "base_bdevs_list": [ 00:16:52.712 { 00:16:52.712 "name": "BaseBdev1", 00:16:52.713 "uuid": "d81fe4f6-4b03-4ec2-b678-ea5a126437dc", 00:16:52.713 "is_configured": true, 00:16:52.713 "data_offset": 2048, 00:16:52.713 "data_size": 63488 00:16:52.713 }, 00:16:52.713 { 00:16:52.713 "name": "BaseBdev2", 00:16:52.713 "uuid": "c9680453-9a04-4c52-9c3c-aadb81b067b3", 00:16:52.713 "is_configured": true, 00:16:52.713 "data_offset": 2048, 00:16:52.713 "data_size": 63488 00:16:52.713 }, 00:16:52.713 { 00:16:52.713 "name": "BaseBdev3", 00:16:52.713 "uuid": "ff237c26-3570-444d-86bd-0948e3268d56", 00:16:52.713 "is_configured": true, 00:16:52.713 "data_offset": 2048, 00:16:52.713 "data_size": 63488 00:16:52.713 } 00:16:52.713 ] 00:16:52.713 } 00:16:52.713 } 00:16:52.713 }' 00:16:52.713 22:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == 
true).name' 00:16:52.713 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:52.713 BaseBdev2 00:16:52.713 BaseBdev3' 00:16:52.713 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:52.713 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:52.713 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:52.972 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:52.972 "name": "BaseBdev1", 00:16:52.972 "aliases": [ 00:16:52.972 "d81fe4f6-4b03-4ec2-b678-ea5a126437dc" 00:16:52.972 ], 00:16:52.972 "product_name": "Malloc disk", 00:16:52.972 "block_size": 512, 00:16:52.972 "num_blocks": 65536, 00:16:52.972 "uuid": "d81fe4f6-4b03-4ec2-b678-ea5a126437dc", 00:16:52.972 "assigned_rate_limits": { 00:16:52.972 "rw_ios_per_sec": 0, 00:16:52.972 "rw_mbytes_per_sec": 0, 00:16:52.972 "r_mbytes_per_sec": 0, 00:16:52.972 "w_mbytes_per_sec": 0 00:16:52.972 }, 00:16:52.972 "claimed": true, 00:16:52.972 "claim_type": "exclusive_write", 00:16:52.972 "zoned": false, 00:16:52.972 "supported_io_types": { 00:16:52.972 "read": true, 00:16:52.972 "write": true, 00:16:52.972 "unmap": true, 00:16:52.972 "flush": true, 00:16:52.972 "reset": true, 00:16:52.972 "nvme_admin": false, 00:16:52.972 "nvme_io": false, 00:16:52.972 "nvme_io_md": false, 00:16:52.972 "write_zeroes": true, 00:16:52.972 "zcopy": true, 00:16:52.972 "get_zone_info": false, 00:16:52.972 "zone_management": false, 00:16:52.972 "zone_append": false, 00:16:52.972 "compare": false, 00:16:52.972 "compare_and_write": false, 00:16:52.972 "abort": true, 00:16:52.972 "seek_hole": false, 00:16:52.972 "seek_data": false, 00:16:52.972 "copy": true, 00:16:52.972 "nvme_iov_md": false 00:16:52.972 
}, 00:16:52.972 "memory_domains": [ 00:16:52.972 { 00:16:52.972 "dma_device_id": "system", 00:16:52.972 "dma_device_type": 1 00:16:52.972 }, 00:16:52.972 { 00:16:52.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.972 "dma_device_type": 2 00:16:52.972 } 00:16:52.972 ], 00:16:52.972 "driver_specific": {} 00:16:52.972 }' 00:16:52.972 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:52.972 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:52.972 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:52.973 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:52.973 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:52.973 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:53.232 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:53.232 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:53.232 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:53.232 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:53.232 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:53.232 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:53.232 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:53.232 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:53.232 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev2 00:16:53.490 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:53.490 "name": "BaseBdev2", 00:16:53.490 "aliases": [ 00:16:53.490 "c9680453-9a04-4c52-9c3c-aadb81b067b3" 00:16:53.490 ], 00:16:53.490 "product_name": "Malloc disk", 00:16:53.490 "block_size": 512, 00:16:53.490 "num_blocks": 65536, 00:16:53.490 "uuid": "c9680453-9a04-4c52-9c3c-aadb81b067b3", 00:16:53.490 "assigned_rate_limits": { 00:16:53.490 "rw_ios_per_sec": 0, 00:16:53.490 "rw_mbytes_per_sec": 0, 00:16:53.490 "r_mbytes_per_sec": 0, 00:16:53.490 "w_mbytes_per_sec": 0 00:16:53.490 }, 00:16:53.490 "claimed": true, 00:16:53.490 "claim_type": "exclusive_write", 00:16:53.490 "zoned": false, 00:16:53.490 "supported_io_types": { 00:16:53.490 "read": true, 00:16:53.490 "write": true, 00:16:53.490 "unmap": true, 00:16:53.490 "flush": true, 00:16:53.490 "reset": true, 00:16:53.490 "nvme_admin": false, 00:16:53.490 "nvme_io": false, 00:16:53.490 "nvme_io_md": false, 00:16:53.490 "write_zeroes": true, 00:16:53.490 "zcopy": true, 00:16:53.490 "get_zone_info": false, 00:16:53.490 "zone_management": false, 00:16:53.490 "zone_append": false, 00:16:53.490 "compare": false, 00:16:53.490 "compare_and_write": false, 00:16:53.490 "abort": true, 00:16:53.490 "seek_hole": false, 00:16:53.490 "seek_data": false, 00:16:53.490 "copy": true, 00:16:53.490 "nvme_iov_md": false 00:16:53.490 }, 00:16:53.490 "memory_domains": [ 00:16:53.490 { 00:16:53.490 "dma_device_id": "system", 00:16:53.490 "dma_device_type": 1 00:16:53.490 }, 00:16:53.490 { 00:16:53.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.490 "dma_device_type": 2 00:16:53.490 } 00:16:53.490 ], 00:16:53.490 "driver_specific": {} 00:16:53.490 }' 00:16:53.490 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:53.490 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:53.490 22:00:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:53.490 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:53.490 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:53.490 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:53.490 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:53.749 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:53.749 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:53.749 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:53.749 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:53.749 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:53.749 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:53.749 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:53.749 22:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:54.007 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:54.007 "name": "BaseBdev3", 00:16:54.007 "aliases": [ 00:16:54.007 "ff237c26-3570-444d-86bd-0948e3268d56" 00:16:54.007 ], 00:16:54.007 "product_name": "Malloc disk", 00:16:54.007 "block_size": 512, 00:16:54.007 "num_blocks": 65536, 00:16:54.007 "uuid": "ff237c26-3570-444d-86bd-0948e3268d56", 00:16:54.007 "assigned_rate_limits": { 00:16:54.007 "rw_ios_per_sec": 0, 00:16:54.007 "rw_mbytes_per_sec": 0, 00:16:54.007 
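Each `verify_raid_bdev_state` call above reduces to: fetch the raid bdev's JSON via `rpc.py bdev_raid_get_bdevs all`, filter it with `jq -r '.[] | select(.name == "Existed_Raid")'`, then compare `state`, `raid_level`, and the base-bdev counts against expectations. A self-contained pure-bash sketch of that comparison (the regex field extraction replaces the jq pipeline and is an illustrative simplification, not the real `bdev_raid.sh` helper):

```shell
#!/usr/bin/env bash
# Pure-bash stand-in for the jq pipeline at bdev_raid.sh@126; the real test runs:
#   rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
#     | jq -r '.[] | select(.name == "Existed_Raid")'
get_field() {                               # get_field <json> <key> -> bare value
    local pattern="\"$2\": *\"?([A-Za-z0-9_-]+)\"?"
    [[ $1 =~ $pattern ]] && echo "${BASH_REMATCH[1]}"
}

verify_raid_bdev_state() {
    local info=$1 expected_state=$2 expected_level=$3 expected_discovered=$4
    [[ $(get_field "$info" state) == "$expected_state" ]] || return 1
    [[ $(get_field "$info" raid_level) == "$expected_level" ]] || return 1
    [[ $(get_field "$info" num_base_bdevs_discovered) == "$expected_discovered" ]] || return 1
}

# Abbreviated form of the info block captured in the trace once all three
# base bdevs are claimed:
raid_bdev_info='{ "name": "Existed_Raid", "state": "online", "raid_level": "raid1",
  "num_base_bdevs": 3, "num_base_bdevs_discovered": 3 }'

verify_raid_bdev_state "$raid_bdev_info" online raid1 3 && echo "state OK"
```

In the log this check runs after every base-bdev creation: the state stays `configuring` with `num_base_bdevs_discovered` at 1, then 2, and flips to `online` only once the third base bdev is claimed.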
"r_mbytes_per_sec": 0, 00:16:54.007 "w_mbytes_per_sec": 0 00:16:54.007 }, 00:16:54.007 "claimed": true, 00:16:54.007 "claim_type": "exclusive_write", 00:16:54.007 "zoned": false, 00:16:54.007 "supported_io_types": { 00:16:54.007 "read": true, 00:16:54.007 "write": true, 00:16:54.007 "unmap": true, 00:16:54.007 "flush": true, 00:16:54.007 "reset": true, 00:16:54.007 "nvme_admin": false, 00:16:54.007 "nvme_io": false, 00:16:54.007 "nvme_io_md": false, 00:16:54.007 "write_zeroes": true, 00:16:54.007 "zcopy": true, 00:16:54.007 "get_zone_info": false, 00:16:54.007 "zone_management": false, 00:16:54.007 "zone_append": false, 00:16:54.007 "compare": false, 00:16:54.007 "compare_and_write": false, 00:16:54.007 "abort": true, 00:16:54.007 "seek_hole": false, 00:16:54.007 "seek_data": false, 00:16:54.007 "copy": true, 00:16:54.007 "nvme_iov_md": false 00:16:54.007 }, 00:16:54.007 "memory_domains": [ 00:16:54.007 { 00:16:54.007 "dma_device_id": "system", 00:16:54.007 "dma_device_type": 1 00:16:54.007 }, 00:16:54.007 { 00:16:54.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.007 "dma_device_type": 2 00:16:54.007 } 00:16:54.007 ], 00:16:54.007 "driver_specific": {} 00:16:54.007 }' 00:16:54.007 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:54.007 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:54.007 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:54.007 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:54.007 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:54.007 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:54.007 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:54.007 22:00:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:54.007 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:54.007 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:54.265 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:54.265 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:54.265 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:54.265 [2024-07-13 22:00:13.614662] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:54.265 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:54.265 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:54.265 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:54.265 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:16:54.525 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:54.525 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:54.525 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:54.525 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:54.525 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:54.525 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:54.525 22:00:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:54.525 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:54.525 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:54.525 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:54.525 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:54.525 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.525 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.525 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:54.525 "name": "Existed_Raid", 00:16:54.525 "uuid": "b0e9149d-ae92-4ae3-8a66-980b3562d41f", 00:16:54.525 "strip_size_kb": 0, 00:16:54.525 "state": "online", 00:16:54.525 "raid_level": "raid1", 00:16:54.525 "superblock": true, 00:16:54.525 "num_base_bdevs": 3, 00:16:54.525 "num_base_bdevs_discovered": 2, 00:16:54.525 "num_base_bdevs_operational": 2, 00:16:54.525 "base_bdevs_list": [ 00:16:54.525 { 00:16:54.525 "name": null, 00:16:54.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.525 "is_configured": false, 00:16:54.525 "data_offset": 2048, 00:16:54.525 "data_size": 63488 00:16:54.525 }, 00:16:54.525 { 00:16:54.525 "name": "BaseBdev2", 00:16:54.525 "uuid": "c9680453-9a04-4c52-9c3c-aadb81b067b3", 00:16:54.525 "is_configured": true, 00:16:54.525 "data_offset": 2048, 00:16:54.525 "data_size": 63488 00:16:54.525 }, 00:16:54.525 { 00:16:54.525 "name": "BaseBdev3", 00:16:54.525 "uuid": "ff237c26-3570-444d-86bd-0948e3268d56", 00:16:54.525 "is_configured": true, 00:16:54.525 "data_offset": 2048, 00:16:54.525 
"data_size": 63488 00:16:54.525 } 00:16:54.525 ] 00:16:54.525 }' 00:16:54.525 22:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:54.525 22:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.095 22:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:55.095 22:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:55.095 22:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:55.095 22:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.352 22:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:55.352 22:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:55.352 22:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:55.352 [2024-07-13 22:00:14.648588] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:55.611 22:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:55.611 22:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:55.611 22:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.611 22:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:55.611 22:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
raid_bdev=Existed_Raid 00:16:55.611 22:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:55.611 22:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:55.870 [2024-07-13 22:00:15.064496] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:55.870 [2024-07-13 22:00:15.064598] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:55.870 [2024-07-13 22:00:15.156635] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:55.870 [2024-07-13 22:00:15.156689] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:55.870 [2024-07-13 22:00:15.156703] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:16:55.870 22:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:55.870 22:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:55.870 22:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.870 22:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:56.129 22:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:56.129 22:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:56.129 22:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:16:56.129 22:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:56.129 
22:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:56.129 22:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:56.129 BaseBdev2 00:16:56.387 22:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:56.387 22:00:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:56.387 22:00:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:56.387 22:00:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:56.387 22:00:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:56.387 22:00:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:56.387 22:00:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:56.387 22:00:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:56.646 [ 00:16:56.646 { 00:16:56.646 "name": "BaseBdev2", 00:16:56.646 "aliases": [ 00:16:56.646 "20da66d5-4cb2-4683-ac18-d1c09e1a4cc4" 00:16:56.646 ], 00:16:56.646 "product_name": "Malloc disk", 00:16:56.646 "block_size": 512, 00:16:56.646 "num_blocks": 65536, 00:16:56.646 "uuid": "20da66d5-4cb2-4683-ac18-d1c09e1a4cc4", 00:16:56.646 "assigned_rate_limits": { 00:16:56.646 "rw_ios_per_sec": 0, 00:16:56.646 "rw_mbytes_per_sec": 0, 00:16:56.646 "r_mbytes_per_sec": 0, 00:16:56.646 "w_mbytes_per_sec": 0 00:16:56.646 }, 
00:16:56.646 "claimed": false, 00:16:56.646 "zoned": false, 00:16:56.646 "supported_io_types": { 00:16:56.646 "read": true, 00:16:56.646 "write": true, 00:16:56.646 "unmap": true, 00:16:56.646 "flush": true, 00:16:56.646 "reset": true, 00:16:56.646 "nvme_admin": false, 00:16:56.646 "nvme_io": false, 00:16:56.646 "nvme_io_md": false, 00:16:56.646 "write_zeroes": true, 00:16:56.646 "zcopy": true, 00:16:56.646 "get_zone_info": false, 00:16:56.646 "zone_management": false, 00:16:56.646 "zone_append": false, 00:16:56.646 "compare": false, 00:16:56.646 "compare_and_write": false, 00:16:56.646 "abort": true, 00:16:56.646 "seek_hole": false, 00:16:56.646 "seek_data": false, 00:16:56.646 "copy": true, 00:16:56.646 "nvme_iov_md": false 00:16:56.646 }, 00:16:56.646 "memory_domains": [ 00:16:56.646 { 00:16:56.646 "dma_device_id": "system", 00:16:56.646 "dma_device_type": 1 00:16:56.646 }, 00:16:56.646 { 00:16:56.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.646 "dma_device_type": 2 00:16:56.646 } 00:16:56.646 ], 00:16:56.646 "driver_specific": {} 00:16:56.646 } 00:16:56.646 ] 00:16:56.646 22:00:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:56.646 22:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:56.646 22:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:56.646 22:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:56.906 BaseBdev3 00:16:56.906 22:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:56.906 22:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:56.906 22:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:16:56.906 22:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:56.906 22:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:56.906 22:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:56.906 22:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:56.906 22:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:57.164 [ 00:16:57.164 { 00:16:57.164 "name": "BaseBdev3", 00:16:57.164 "aliases": [ 00:16:57.164 "ad1a8044-e176-4116-a81f-8520e282551c" 00:16:57.164 ], 00:16:57.164 "product_name": "Malloc disk", 00:16:57.164 "block_size": 512, 00:16:57.164 "num_blocks": 65536, 00:16:57.164 "uuid": "ad1a8044-e176-4116-a81f-8520e282551c", 00:16:57.164 "assigned_rate_limits": { 00:16:57.164 "rw_ios_per_sec": 0, 00:16:57.164 "rw_mbytes_per_sec": 0, 00:16:57.164 "r_mbytes_per_sec": 0, 00:16:57.165 "w_mbytes_per_sec": 0 00:16:57.165 }, 00:16:57.165 "claimed": false, 00:16:57.165 "zoned": false, 00:16:57.165 "supported_io_types": { 00:16:57.165 "read": true, 00:16:57.165 "write": true, 00:16:57.165 "unmap": true, 00:16:57.165 "flush": true, 00:16:57.165 "reset": true, 00:16:57.165 "nvme_admin": false, 00:16:57.165 "nvme_io": false, 00:16:57.165 "nvme_io_md": false, 00:16:57.165 "write_zeroes": true, 00:16:57.165 "zcopy": true, 00:16:57.165 "get_zone_info": false, 00:16:57.165 "zone_management": false, 00:16:57.165 "zone_append": false, 00:16:57.165 "compare": false, 00:16:57.165 "compare_and_write": false, 00:16:57.165 "abort": true, 00:16:57.165 "seek_hole": false, 00:16:57.165 "seek_data": false, 00:16:57.165 
"copy": true, 00:16:57.165 "nvme_iov_md": false 00:16:57.165 }, 00:16:57.165 "memory_domains": [ 00:16:57.165 { 00:16:57.165 "dma_device_id": "system", 00:16:57.165 "dma_device_type": 1 00:16:57.165 }, 00:16:57.165 { 00:16:57.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.165 "dma_device_type": 2 00:16:57.165 } 00:16:57.165 ], 00:16:57.165 "driver_specific": {} 00:16:57.165 } 00:16:57.165 ] 00:16:57.165 22:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:57.165 22:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:57.165 22:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:57.165 22:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:57.165 [2024-07-13 22:00:16.545383] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:57.165 [2024-07-13 22:00:16.545431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:57.165 [2024-07-13 22:00:16.545472] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:57.165 [2024-07-13 22:00:16.547241] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:57.424 22:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:57.424 22:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:57.424 22:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:57.424 22:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:16:57.424 22:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:57.424 22:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:57.424 22:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:57.424 22:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:57.424 22:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:57.424 22:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:57.424 22:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.424 22:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.424 22:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:57.424 "name": "Existed_Raid", 00:16:57.424 "uuid": "e0d2bf59-343a-4be1-a559-ea98c45a748a", 00:16:57.424 "strip_size_kb": 0, 00:16:57.424 "state": "configuring", 00:16:57.424 "raid_level": "raid1", 00:16:57.424 "superblock": true, 00:16:57.424 "num_base_bdevs": 3, 00:16:57.424 "num_base_bdevs_discovered": 2, 00:16:57.424 "num_base_bdevs_operational": 3, 00:16:57.424 "base_bdevs_list": [ 00:16:57.424 { 00:16:57.424 "name": "BaseBdev1", 00:16:57.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.424 "is_configured": false, 00:16:57.424 "data_offset": 0, 00:16:57.424 "data_size": 0 00:16:57.424 }, 00:16:57.424 { 00:16:57.424 "name": "BaseBdev2", 00:16:57.424 "uuid": "20da66d5-4cb2-4683-ac18-d1c09e1a4cc4", 00:16:57.424 "is_configured": true, 00:16:57.424 "data_offset": 2048, 00:16:57.424 "data_size": 63488 00:16:57.424 }, 
00:16:57.424 { 00:16:57.424 "name": "BaseBdev3", 00:16:57.424 "uuid": "ad1a8044-e176-4116-a81f-8520e282551c", 00:16:57.424 "is_configured": true, 00:16:57.424 "data_offset": 2048, 00:16:57.424 "data_size": 63488 00:16:57.424 } 00:16:57.424 ] 00:16:57.424 }' 00:16:57.424 22:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:57.424 22:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.992 22:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:57.992 [2024-07-13 22:00:17.331438] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:57.992 22:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:57.992 22:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:57.992 22:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:57.992 22:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:57.992 22:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:57.992 22:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:57.992 22:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:57.992 22:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:57.992 22:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:57.992 22:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:57.992 22:00:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.992 22:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.254 22:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:58.254 "name": "Existed_Raid", 00:16:58.255 "uuid": "e0d2bf59-343a-4be1-a559-ea98c45a748a", 00:16:58.255 "strip_size_kb": 0, 00:16:58.255 "state": "configuring", 00:16:58.255 "raid_level": "raid1", 00:16:58.255 "superblock": true, 00:16:58.255 "num_base_bdevs": 3, 00:16:58.255 "num_base_bdevs_discovered": 1, 00:16:58.255 "num_base_bdevs_operational": 3, 00:16:58.255 "base_bdevs_list": [ 00:16:58.255 { 00:16:58.255 "name": "BaseBdev1", 00:16:58.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.255 "is_configured": false, 00:16:58.255 "data_offset": 0, 00:16:58.255 "data_size": 0 00:16:58.255 }, 00:16:58.255 { 00:16:58.255 "name": null, 00:16:58.255 "uuid": "20da66d5-4cb2-4683-ac18-d1c09e1a4cc4", 00:16:58.255 "is_configured": false, 00:16:58.255 "data_offset": 2048, 00:16:58.255 "data_size": 63488 00:16:58.255 }, 00:16:58.255 { 00:16:58.255 "name": "BaseBdev3", 00:16:58.255 "uuid": "ad1a8044-e176-4116-a81f-8520e282551c", 00:16:58.255 "is_configured": true, 00:16:58.255 "data_offset": 2048, 00:16:58.255 "data_size": 63488 00:16:58.255 } 00:16:58.255 ] 00:16:58.255 }' 00:16:58.255 22:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:58.255 22:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.822 22:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.822 22:00:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:58.822 22:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:58.822 22:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:59.080 [2024-07-13 22:00:18.339557] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:59.080 BaseBdev1 00:16:59.080 22:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:59.080 22:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:59.080 22:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:59.080 22:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:59.080 22:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:59.080 22:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:59.081 22:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:59.340 22:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:59.340 [ 00:16:59.340 { 00:16:59.340 "name": "BaseBdev1", 00:16:59.340 "aliases": [ 00:16:59.340 "d08cbc9b-ebdd-4615-8bd9-96c8b85f8232" 00:16:59.340 ], 00:16:59.340 "product_name": "Malloc disk", 00:16:59.340 "block_size": 512, 00:16:59.340 "num_blocks": 65536, 00:16:59.340 "uuid": 
"d08cbc9b-ebdd-4615-8bd9-96c8b85f8232", 00:16:59.340 "assigned_rate_limits": { 00:16:59.340 "rw_ios_per_sec": 0, 00:16:59.340 "rw_mbytes_per_sec": 0, 00:16:59.340 "r_mbytes_per_sec": 0, 00:16:59.340 "w_mbytes_per_sec": 0 00:16:59.340 }, 00:16:59.340 "claimed": true, 00:16:59.340 "claim_type": "exclusive_write", 00:16:59.340 "zoned": false, 00:16:59.340 "supported_io_types": { 00:16:59.340 "read": true, 00:16:59.340 "write": true, 00:16:59.340 "unmap": true, 00:16:59.340 "flush": true, 00:16:59.340 "reset": true, 00:16:59.340 "nvme_admin": false, 00:16:59.340 "nvme_io": false, 00:16:59.340 "nvme_io_md": false, 00:16:59.340 "write_zeroes": true, 00:16:59.340 "zcopy": true, 00:16:59.340 "get_zone_info": false, 00:16:59.340 "zone_management": false, 00:16:59.340 "zone_append": false, 00:16:59.340 "compare": false, 00:16:59.340 "compare_and_write": false, 00:16:59.340 "abort": true, 00:16:59.340 "seek_hole": false, 00:16:59.340 "seek_data": false, 00:16:59.340 "copy": true, 00:16:59.340 "nvme_iov_md": false 00:16:59.340 }, 00:16:59.340 "memory_domains": [ 00:16:59.340 { 00:16:59.340 "dma_device_id": "system", 00:16:59.340 "dma_device_type": 1 00:16:59.340 }, 00:16:59.340 { 00:16:59.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.340 "dma_device_type": 2 00:16:59.340 } 00:16:59.340 ], 00:16:59.340 "driver_specific": {} 00:16:59.340 } 00:16:59.340 ] 00:16:59.340 22:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:59.340 22:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:59.340 22:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:59.340 22:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:59.340 22:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:59.340 
22:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:59.340 22:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:59.340 22:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:59.340 22:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:59.340 22:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:59.340 22:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:59.340 22:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.340 22:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.599 22:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:59.599 "name": "Existed_Raid", 00:16:59.599 "uuid": "e0d2bf59-343a-4be1-a559-ea98c45a748a", 00:16:59.599 "strip_size_kb": 0, 00:16:59.599 "state": "configuring", 00:16:59.599 "raid_level": "raid1", 00:16:59.599 "superblock": true, 00:16:59.599 "num_base_bdevs": 3, 00:16:59.599 "num_base_bdevs_discovered": 2, 00:16:59.599 "num_base_bdevs_operational": 3, 00:16:59.599 "base_bdevs_list": [ 00:16:59.599 { 00:16:59.599 "name": "BaseBdev1", 00:16:59.599 "uuid": "d08cbc9b-ebdd-4615-8bd9-96c8b85f8232", 00:16:59.599 "is_configured": true, 00:16:59.599 "data_offset": 2048, 00:16:59.599 "data_size": 63488 00:16:59.599 }, 00:16:59.599 { 00:16:59.599 "name": null, 00:16:59.599 "uuid": "20da66d5-4cb2-4683-ac18-d1c09e1a4cc4", 00:16:59.599 "is_configured": false, 00:16:59.599 "data_offset": 2048, 00:16:59.599 "data_size": 63488 00:16:59.599 }, 00:16:59.599 { 00:16:59.599 "name": 
"BaseBdev3", 00:16:59.599 "uuid": "ad1a8044-e176-4116-a81f-8520e282551c", 00:16:59.599 "is_configured": true, 00:16:59.599 "data_offset": 2048, 00:16:59.599 "data_size": 63488 00:16:59.599 } 00:16:59.599 ] 00:16:59.599 }' 00:16:59.599 22:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:59.599 22:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.166 22:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.166 22:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:00.166 22:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:00.166 22:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:00.425 [2024-07-13 22:00:19.635014] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:00.425 22:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:00.425 22:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:00.425 22:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:00.425 22:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:00.425 22:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:00.425 22:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:00.425 22:00:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:00.425 22:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:00.425 22:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:00.425 22:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:00.425 22:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.425 22:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.684 22:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:00.684 "name": "Existed_Raid", 00:17:00.684 "uuid": "e0d2bf59-343a-4be1-a559-ea98c45a748a", 00:17:00.684 "strip_size_kb": 0, 00:17:00.684 "state": "configuring", 00:17:00.684 "raid_level": "raid1", 00:17:00.684 "superblock": true, 00:17:00.684 "num_base_bdevs": 3, 00:17:00.684 "num_base_bdevs_discovered": 1, 00:17:00.684 "num_base_bdevs_operational": 3, 00:17:00.684 "base_bdevs_list": [ 00:17:00.684 { 00:17:00.684 "name": "BaseBdev1", 00:17:00.684 "uuid": "d08cbc9b-ebdd-4615-8bd9-96c8b85f8232", 00:17:00.684 "is_configured": true, 00:17:00.684 "data_offset": 2048, 00:17:00.684 "data_size": 63488 00:17:00.684 }, 00:17:00.684 { 00:17:00.684 "name": null, 00:17:00.684 "uuid": "20da66d5-4cb2-4683-ac18-d1c09e1a4cc4", 00:17:00.684 "is_configured": false, 00:17:00.684 "data_offset": 2048, 00:17:00.684 "data_size": 63488 00:17:00.684 }, 00:17:00.684 { 00:17:00.684 "name": null, 00:17:00.684 "uuid": "ad1a8044-e176-4116-a81f-8520e282551c", 00:17:00.684 "is_configured": false, 00:17:00.684 "data_offset": 2048, 00:17:00.684 "data_size": 63488 00:17:00.684 } 00:17:00.684 ] 00:17:00.684 }' 00:17:00.684 22:00:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:00.684 22:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.250 22:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.250 22:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:01.250 22:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:01.250 22:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:01.509 [2024-07-13 22:00:20.673775] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:01.509 22:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:01.509 22:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:01.509 22:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:01.509 22:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:01.509 22:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:01.509 22:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:01.509 22:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:01.509 22:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:01.509 22:00:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:01.509 22:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:01.509 22:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.509 22:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.509 22:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:01.509 "name": "Existed_Raid", 00:17:01.509 "uuid": "e0d2bf59-343a-4be1-a559-ea98c45a748a", 00:17:01.509 "strip_size_kb": 0, 00:17:01.509 "state": "configuring", 00:17:01.509 "raid_level": "raid1", 00:17:01.509 "superblock": true, 00:17:01.509 "num_base_bdevs": 3, 00:17:01.509 "num_base_bdevs_discovered": 2, 00:17:01.509 "num_base_bdevs_operational": 3, 00:17:01.509 "base_bdevs_list": [ 00:17:01.509 { 00:17:01.509 "name": "BaseBdev1", 00:17:01.509 "uuid": "d08cbc9b-ebdd-4615-8bd9-96c8b85f8232", 00:17:01.509 "is_configured": true, 00:17:01.509 "data_offset": 2048, 00:17:01.509 "data_size": 63488 00:17:01.509 }, 00:17:01.509 { 00:17:01.509 "name": null, 00:17:01.509 "uuid": "20da66d5-4cb2-4683-ac18-d1c09e1a4cc4", 00:17:01.509 "is_configured": false, 00:17:01.509 "data_offset": 2048, 00:17:01.509 "data_size": 63488 00:17:01.509 }, 00:17:01.509 { 00:17:01.509 "name": "BaseBdev3", 00:17:01.509 "uuid": "ad1a8044-e176-4116-a81f-8520e282551c", 00:17:01.509 "is_configured": true, 00:17:01.509 "data_offset": 2048, 00:17:01.509 "data_size": 63488 00:17:01.509 } 00:17:01.509 ] 00:17:01.509 }' 00:17:01.509 22:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:01.509 22:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.076 22:00:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:02.076 22:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.335 22:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:02.335 22:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:02.335 [2024-07-13 22:00:21.648362] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:02.595 22:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:02.595 22:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:02.595 22:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:02.595 22:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:02.595 22:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:02.595 22:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:02.595 22:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:02.595 22:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:02.595 22:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:02.595 22:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:02.595 22:00:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.595 22:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.595 22:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:02.595 "name": "Existed_Raid", 00:17:02.595 "uuid": "e0d2bf59-343a-4be1-a559-ea98c45a748a", 00:17:02.595 "strip_size_kb": 0, 00:17:02.595 "state": "configuring", 00:17:02.595 "raid_level": "raid1", 00:17:02.595 "superblock": true, 00:17:02.595 "num_base_bdevs": 3, 00:17:02.595 "num_base_bdevs_discovered": 1, 00:17:02.595 "num_base_bdevs_operational": 3, 00:17:02.595 "base_bdevs_list": [ 00:17:02.595 { 00:17:02.595 "name": null, 00:17:02.595 "uuid": "d08cbc9b-ebdd-4615-8bd9-96c8b85f8232", 00:17:02.595 "is_configured": false, 00:17:02.595 "data_offset": 2048, 00:17:02.595 "data_size": 63488 00:17:02.595 }, 00:17:02.595 { 00:17:02.595 "name": null, 00:17:02.595 "uuid": "20da66d5-4cb2-4683-ac18-d1c09e1a4cc4", 00:17:02.595 "is_configured": false, 00:17:02.595 "data_offset": 2048, 00:17:02.595 "data_size": 63488 00:17:02.595 }, 00:17:02.595 { 00:17:02.595 "name": "BaseBdev3", 00:17:02.595 "uuid": "ad1a8044-e176-4116-a81f-8520e282551c", 00:17:02.595 "is_configured": true, 00:17:02.595 "data_offset": 2048, 00:17:02.595 "data_size": 63488 00:17:02.595 } 00:17:02.595 ] 00:17:02.595 }' 00:17:02.595 22:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:02.595 22:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.162 22:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.162 22:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:17:03.162 22:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:03.163 22:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:03.422 [2024-07-13 22:00:22.671445] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:03.422 22:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:03.422 22:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:03.422 22:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:03.422 22:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:03.422 22:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:03.422 22:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:03.422 22:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:03.422 22:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:03.422 22:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:03.422 22:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:03.422 22:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.422 22:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:17:03.680 22:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:03.680 "name": "Existed_Raid", 00:17:03.680 "uuid": "e0d2bf59-343a-4be1-a559-ea98c45a748a", 00:17:03.680 "strip_size_kb": 0, 00:17:03.680 "state": "configuring", 00:17:03.680 "raid_level": "raid1", 00:17:03.680 "superblock": true, 00:17:03.680 "num_base_bdevs": 3, 00:17:03.680 "num_base_bdevs_discovered": 2, 00:17:03.680 "num_base_bdevs_operational": 3, 00:17:03.680 "base_bdevs_list": [ 00:17:03.680 { 00:17:03.680 "name": null, 00:17:03.680 "uuid": "d08cbc9b-ebdd-4615-8bd9-96c8b85f8232", 00:17:03.680 "is_configured": false, 00:17:03.680 "data_offset": 2048, 00:17:03.680 "data_size": 63488 00:17:03.680 }, 00:17:03.680 { 00:17:03.680 "name": "BaseBdev2", 00:17:03.680 "uuid": "20da66d5-4cb2-4683-ac18-d1c09e1a4cc4", 00:17:03.680 "is_configured": true, 00:17:03.680 "data_offset": 2048, 00:17:03.680 "data_size": 63488 00:17:03.680 }, 00:17:03.680 { 00:17:03.680 "name": "BaseBdev3", 00:17:03.680 "uuid": "ad1a8044-e176-4116-a81f-8520e282551c", 00:17:03.680 "is_configured": true, 00:17:03.680 "data_offset": 2048, 00:17:03.680 "data_size": 63488 00:17:03.680 } 00:17:03.680 ] 00:17:03.680 }' 00:17:03.680 22:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:03.680 22:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.938 22:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.938 22:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:04.196 22:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:04.196 22:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.196 22:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:04.454 22:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u d08cbc9b-ebdd-4615-8bd9-96c8b85f8232 00:17:04.713 [2024-07-13 22:00:23.851257] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:04.713 [2024-07-13 22:00:23.851473] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041780 00:17:04.713 [2024-07-13 22:00:23.851488] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:04.713 [2024-07-13 22:00:23.851725] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:17:04.713 [2024-07-13 22:00:23.851887] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041780 00:17:04.713 [2024-07-13 22:00:23.851899] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000041780 00:17:04.713 NewBaseBdev 00:17:04.713 [2024-07-13 22:00:23.852061] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.713 22:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:04.713 22:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:17:04.713 22:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:04.713 22:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:04.713 22:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z 
'' ]] 00:17:04.713 22:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:04.713 22:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:04.713 22:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:04.970 [ 00:17:04.970 { 00:17:04.970 "name": "NewBaseBdev", 00:17:04.970 "aliases": [ 00:17:04.970 "d08cbc9b-ebdd-4615-8bd9-96c8b85f8232" 00:17:04.970 ], 00:17:04.970 "product_name": "Malloc disk", 00:17:04.970 "block_size": 512, 00:17:04.970 "num_blocks": 65536, 00:17:04.970 "uuid": "d08cbc9b-ebdd-4615-8bd9-96c8b85f8232", 00:17:04.970 "assigned_rate_limits": { 00:17:04.970 "rw_ios_per_sec": 0, 00:17:04.970 "rw_mbytes_per_sec": 0, 00:17:04.970 "r_mbytes_per_sec": 0, 00:17:04.970 "w_mbytes_per_sec": 0 00:17:04.970 }, 00:17:04.970 "claimed": true, 00:17:04.970 "claim_type": "exclusive_write", 00:17:04.970 "zoned": false, 00:17:04.970 "supported_io_types": { 00:17:04.970 "read": true, 00:17:04.970 "write": true, 00:17:04.970 "unmap": true, 00:17:04.970 "flush": true, 00:17:04.970 "reset": true, 00:17:04.970 "nvme_admin": false, 00:17:04.970 "nvme_io": false, 00:17:04.970 "nvme_io_md": false, 00:17:04.970 "write_zeroes": true, 00:17:04.970 "zcopy": true, 00:17:04.970 "get_zone_info": false, 00:17:04.970 "zone_management": false, 00:17:04.970 "zone_append": false, 00:17:04.970 "compare": false, 00:17:04.970 "compare_and_write": false, 00:17:04.970 "abort": true, 00:17:04.970 "seek_hole": false, 00:17:04.970 "seek_data": false, 00:17:04.970 "copy": true, 00:17:04.970 "nvme_iov_md": false 00:17:04.970 }, 00:17:04.970 "memory_domains": [ 00:17:04.970 { 00:17:04.970 "dma_device_id": "system", 00:17:04.970 
"dma_device_type": 1 00:17:04.970 }, 00:17:04.970 { 00:17:04.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.970 "dma_device_type": 2 00:17:04.970 } 00:17:04.970 ], 00:17:04.970 "driver_specific": {} 00:17:04.970 } 00:17:04.970 ] 00:17:04.970 22:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:04.970 22:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:04.970 22:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:04.970 22:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:04.970 22:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:04.970 22:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:04.970 22:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:04.970 22:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:04.970 22:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:04.970 22:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:04.970 22:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:04.970 22:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.970 22:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.227 22:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:05.228 "name": 
"Existed_Raid", 00:17:05.228 "uuid": "e0d2bf59-343a-4be1-a559-ea98c45a748a", 00:17:05.228 "strip_size_kb": 0, 00:17:05.228 "state": "online", 00:17:05.228 "raid_level": "raid1", 00:17:05.228 "superblock": true, 00:17:05.228 "num_base_bdevs": 3, 00:17:05.228 "num_base_bdevs_discovered": 3, 00:17:05.228 "num_base_bdevs_operational": 3, 00:17:05.228 "base_bdevs_list": [ 00:17:05.228 { 00:17:05.228 "name": "NewBaseBdev", 00:17:05.228 "uuid": "d08cbc9b-ebdd-4615-8bd9-96c8b85f8232", 00:17:05.228 "is_configured": true, 00:17:05.228 "data_offset": 2048, 00:17:05.228 "data_size": 63488 00:17:05.228 }, 00:17:05.228 { 00:17:05.228 "name": "BaseBdev2", 00:17:05.228 "uuid": "20da66d5-4cb2-4683-ac18-d1c09e1a4cc4", 00:17:05.228 "is_configured": true, 00:17:05.228 "data_offset": 2048, 00:17:05.228 "data_size": 63488 00:17:05.228 }, 00:17:05.228 { 00:17:05.228 "name": "BaseBdev3", 00:17:05.228 "uuid": "ad1a8044-e176-4116-a81f-8520e282551c", 00:17:05.228 "is_configured": true, 00:17:05.228 "data_offset": 2048, 00:17:05.228 "data_size": 63488 00:17:05.228 } 00:17:05.228 ] 00:17:05.228 }' 00:17:05.228 22:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:05.228 22:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.488 22:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:05.488 22:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:05.488 22:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:05.488 22:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:05.488 22:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:05.488 22:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:05.488 
22:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:05.488 22:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:05.746 [2024-07-13 22:00:25.006628] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:05.746 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:05.746 "name": "Existed_Raid", 00:17:05.746 "aliases": [ 00:17:05.746 "e0d2bf59-343a-4be1-a559-ea98c45a748a" 00:17:05.746 ], 00:17:05.746 "product_name": "Raid Volume", 00:17:05.746 "block_size": 512, 00:17:05.746 "num_blocks": 63488, 00:17:05.746 "uuid": "e0d2bf59-343a-4be1-a559-ea98c45a748a", 00:17:05.746 "assigned_rate_limits": { 00:17:05.746 "rw_ios_per_sec": 0, 00:17:05.746 "rw_mbytes_per_sec": 0, 00:17:05.746 "r_mbytes_per_sec": 0, 00:17:05.746 "w_mbytes_per_sec": 0 00:17:05.746 }, 00:17:05.746 "claimed": false, 00:17:05.746 "zoned": false, 00:17:05.746 "supported_io_types": { 00:17:05.746 "read": true, 00:17:05.746 "write": true, 00:17:05.746 "unmap": false, 00:17:05.746 "flush": false, 00:17:05.746 "reset": true, 00:17:05.746 "nvme_admin": false, 00:17:05.746 "nvme_io": false, 00:17:05.746 "nvme_io_md": false, 00:17:05.746 "write_zeroes": true, 00:17:05.746 "zcopy": false, 00:17:05.746 "get_zone_info": false, 00:17:05.746 "zone_management": false, 00:17:05.746 "zone_append": false, 00:17:05.746 "compare": false, 00:17:05.746 "compare_and_write": false, 00:17:05.746 "abort": false, 00:17:05.746 "seek_hole": false, 00:17:05.746 "seek_data": false, 00:17:05.746 "copy": false, 00:17:05.746 "nvme_iov_md": false 00:17:05.746 }, 00:17:05.746 "memory_domains": [ 00:17:05.746 { 00:17:05.746 "dma_device_id": "system", 00:17:05.746 "dma_device_type": 1 00:17:05.746 }, 00:17:05.746 { 00:17:05.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:17:05.746 "dma_device_type": 2 00:17:05.746 }, 00:17:05.746 { 00:17:05.746 "dma_device_id": "system", 00:17:05.746 "dma_device_type": 1 00:17:05.746 }, 00:17:05.746 { 00:17:05.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.746 "dma_device_type": 2 00:17:05.746 }, 00:17:05.746 { 00:17:05.746 "dma_device_id": "system", 00:17:05.746 "dma_device_type": 1 00:17:05.746 }, 00:17:05.746 { 00:17:05.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.746 "dma_device_type": 2 00:17:05.746 } 00:17:05.746 ], 00:17:05.746 "driver_specific": { 00:17:05.746 "raid": { 00:17:05.746 "uuid": "e0d2bf59-343a-4be1-a559-ea98c45a748a", 00:17:05.746 "strip_size_kb": 0, 00:17:05.746 "state": "online", 00:17:05.746 "raid_level": "raid1", 00:17:05.746 "superblock": true, 00:17:05.746 "num_base_bdevs": 3, 00:17:05.746 "num_base_bdevs_discovered": 3, 00:17:05.746 "num_base_bdevs_operational": 3, 00:17:05.746 "base_bdevs_list": [ 00:17:05.746 { 00:17:05.746 "name": "NewBaseBdev", 00:17:05.746 "uuid": "d08cbc9b-ebdd-4615-8bd9-96c8b85f8232", 00:17:05.746 "is_configured": true, 00:17:05.746 "data_offset": 2048, 00:17:05.746 "data_size": 63488 00:17:05.746 }, 00:17:05.746 { 00:17:05.746 "name": "BaseBdev2", 00:17:05.746 "uuid": "20da66d5-4cb2-4683-ac18-d1c09e1a4cc4", 00:17:05.746 "is_configured": true, 00:17:05.746 "data_offset": 2048, 00:17:05.746 "data_size": 63488 00:17:05.746 }, 00:17:05.746 { 00:17:05.746 "name": "BaseBdev3", 00:17:05.746 "uuid": "ad1a8044-e176-4116-a81f-8520e282551c", 00:17:05.746 "is_configured": true, 00:17:05.746 "data_offset": 2048, 00:17:05.746 "data_size": 63488 00:17:05.746 } 00:17:05.746 ] 00:17:05.746 } 00:17:05.746 } 00:17:05.746 }' 00:17:05.746 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:05.746 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:05.746 BaseBdev2 
00:17:05.746 BaseBdev3' 00:17:05.747 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:05.747 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:05.747 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:06.005 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:06.005 "name": "NewBaseBdev", 00:17:06.005 "aliases": [ 00:17:06.005 "d08cbc9b-ebdd-4615-8bd9-96c8b85f8232" 00:17:06.005 ], 00:17:06.005 "product_name": "Malloc disk", 00:17:06.005 "block_size": 512, 00:17:06.005 "num_blocks": 65536, 00:17:06.005 "uuid": "d08cbc9b-ebdd-4615-8bd9-96c8b85f8232", 00:17:06.005 "assigned_rate_limits": { 00:17:06.005 "rw_ios_per_sec": 0, 00:17:06.005 "rw_mbytes_per_sec": 0, 00:17:06.005 "r_mbytes_per_sec": 0, 00:17:06.005 "w_mbytes_per_sec": 0 00:17:06.005 }, 00:17:06.005 "claimed": true, 00:17:06.005 "claim_type": "exclusive_write", 00:17:06.005 "zoned": false, 00:17:06.005 "supported_io_types": { 00:17:06.005 "read": true, 00:17:06.005 "write": true, 00:17:06.005 "unmap": true, 00:17:06.005 "flush": true, 00:17:06.005 "reset": true, 00:17:06.005 "nvme_admin": false, 00:17:06.005 "nvme_io": false, 00:17:06.005 "nvme_io_md": false, 00:17:06.005 "write_zeroes": true, 00:17:06.005 "zcopy": true, 00:17:06.005 "get_zone_info": false, 00:17:06.005 "zone_management": false, 00:17:06.005 "zone_append": false, 00:17:06.005 "compare": false, 00:17:06.005 "compare_and_write": false, 00:17:06.005 "abort": true, 00:17:06.005 "seek_hole": false, 00:17:06.005 "seek_data": false, 00:17:06.005 "copy": true, 00:17:06.005 "nvme_iov_md": false 00:17:06.005 }, 00:17:06.005 "memory_domains": [ 00:17:06.005 { 00:17:06.005 "dma_device_id": "system", 00:17:06.005 "dma_device_type": 1 00:17:06.005 }, 
00:17:06.005 { 00:17:06.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.005 "dma_device_type": 2 00:17:06.005 } 00:17:06.005 ], 00:17:06.005 "driver_specific": {} 00:17:06.005 }' 00:17:06.005 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:06.005 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:06.005 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:06.005 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:06.005 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:06.005 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:06.005 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:06.263 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:06.263 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:06.263 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:06.263 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:06.263 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:06.263 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:06.263 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:06.263 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:06.521 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 
00:17:06.521 "name": "BaseBdev2", 00:17:06.521 "aliases": [ 00:17:06.521 "20da66d5-4cb2-4683-ac18-d1c09e1a4cc4" 00:17:06.521 ], 00:17:06.521 "product_name": "Malloc disk", 00:17:06.521 "block_size": 512, 00:17:06.521 "num_blocks": 65536, 00:17:06.521 "uuid": "20da66d5-4cb2-4683-ac18-d1c09e1a4cc4", 00:17:06.521 "assigned_rate_limits": { 00:17:06.521 "rw_ios_per_sec": 0, 00:17:06.521 "rw_mbytes_per_sec": 0, 00:17:06.521 "r_mbytes_per_sec": 0, 00:17:06.521 "w_mbytes_per_sec": 0 00:17:06.521 }, 00:17:06.521 "claimed": true, 00:17:06.521 "claim_type": "exclusive_write", 00:17:06.521 "zoned": false, 00:17:06.521 "supported_io_types": { 00:17:06.521 "read": true, 00:17:06.521 "write": true, 00:17:06.521 "unmap": true, 00:17:06.521 "flush": true, 00:17:06.521 "reset": true, 00:17:06.521 "nvme_admin": false, 00:17:06.521 "nvme_io": false, 00:17:06.521 "nvme_io_md": false, 00:17:06.521 "write_zeroes": true, 00:17:06.521 "zcopy": true, 00:17:06.521 "get_zone_info": false, 00:17:06.521 "zone_management": false, 00:17:06.521 "zone_append": false, 00:17:06.521 "compare": false, 00:17:06.521 "compare_and_write": false, 00:17:06.521 "abort": true, 00:17:06.521 "seek_hole": false, 00:17:06.521 "seek_data": false, 00:17:06.521 "copy": true, 00:17:06.521 "nvme_iov_md": false 00:17:06.521 }, 00:17:06.521 "memory_domains": [ 00:17:06.521 { 00:17:06.521 "dma_device_id": "system", 00:17:06.521 "dma_device_type": 1 00:17:06.521 }, 00:17:06.521 { 00:17:06.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.521 "dma_device_type": 2 00:17:06.521 } 00:17:06.521 ], 00:17:06.521 "driver_specific": {} 00:17:06.521 }' 00:17:06.521 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:06.521 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:06.521 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:06.521 22:00:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:06.521 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:06.521 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:06.521 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:06.521 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:06.521 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:06.521 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:06.521 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:06.779 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:06.779 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:06.779 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:06.779 22:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:06.779 22:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:06.779 "name": "BaseBdev3", 00:17:06.779 "aliases": [ 00:17:06.779 "ad1a8044-e176-4116-a81f-8520e282551c" 00:17:06.779 ], 00:17:06.779 "product_name": "Malloc disk", 00:17:06.779 "block_size": 512, 00:17:06.779 "num_blocks": 65536, 00:17:06.779 "uuid": "ad1a8044-e176-4116-a81f-8520e282551c", 00:17:06.779 "assigned_rate_limits": { 00:17:06.779 "rw_ios_per_sec": 0, 00:17:06.779 "rw_mbytes_per_sec": 0, 00:17:06.779 "r_mbytes_per_sec": 0, 00:17:06.779 "w_mbytes_per_sec": 0 00:17:06.779 }, 00:17:06.779 "claimed": true, 00:17:06.779 "claim_type": "exclusive_write", 
00:17:06.779 "zoned": false, 00:17:06.779 "supported_io_types": { 00:17:06.779 "read": true, 00:17:06.779 "write": true, 00:17:06.779 "unmap": true, 00:17:06.779 "flush": true, 00:17:06.779 "reset": true, 00:17:06.779 "nvme_admin": false, 00:17:06.779 "nvme_io": false, 00:17:06.779 "nvme_io_md": false, 00:17:06.779 "write_zeroes": true, 00:17:06.779 "zcopy": true, 00:17:06.779 "get_zone_info": false, 00:17:06.779 "zone_management": false, 00:17:06.779 "zone_append": false, 00:17:06.779 "compare": false, 00:17:06.779 "compare_and_write": false, 00:17:06.779 "abort": true, 00:17:06.779 "seek_hole": false, 00:17:06.779 "seek_data": false, 00:17:06.779 "copy": true, 00:17:06.779 "nvme_iov_md": false 00:17:06.779 }, 00:17:06.779 "memory_domains": [ 00:17:06.779 { 00:17:06.779 "dma_device_id": "system", 00:17:06.779 "dma_device_type": 1 00:17:06.779 }, 00:17:06.779 { 00:17:06.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.779 "dma_device_type": 2 00:17:06.779 } 00:17:06.779 ], 00:17:06.779 "driver_specific": {} 00:17:06.779 }' 00:17:06.779 22:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:06.779 22:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:06.779 22:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:06.779 22:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:07.037 22:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:07.037 22:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:07.037 22:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:07.037 22:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:07.037 22:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:17:07.037 22:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:07.037 22:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:07.037 22:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:07.037 22:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:07.294 [2024-07-13 22:00:26.570607] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:07.295 [2024-07-13 22:00:26.570638] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:07.295 [2024-07-13 22:00:26.570714] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:07.295 [2024-07-13 22:00:26.570998] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:07.295 [2024-07-13 22:00:26.571013] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041780 name Existed_Raid, state offline 00:17:07.295 22:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1401531 00:17:07.295 22:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1401531 ']' 00:17:07.295 22:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 1401531 00:17:07.295 22:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:17:07.295 22:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:07.295 22:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1401531 00:17:07.295 22:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:17:07.295 22:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:07.295 22:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1401531' 00:17:07.295 killing process with pid 1401531 00:17:07.295 22:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 1401531 00:17:07.295 [2024-07-13 22:00:26.646615] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:07.295 22:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 1401531 00:17:07.553 [2024-07-13 22:00:26.873720] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:08.950 22:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:17:08.950 00:17:08.950 real 0m22.954s 00:17:08.950 user 0m40.158s 00:17:08.950 sys 0m4.267s 00:17:08.950 22:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:08.950 22:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.950 ************************************ 00:17:08.950 END TEST raid_state_function_test_sb 00:17:08.950 ************************************ 00:17:08.950 22:00:28 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:08.950 22:00:28 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:17:08.950 22:00:28 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:08.950 22:00:28 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:08.950 22:00:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:08.950 ************************************ 00:17:08.950 START TEST raid_superblock_test 00:17:08.950 ************************************ 00:17:08.950 22:00:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@1123 -- # raid_superblock_test raid1 3 00:17:08.950 22:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:17:08.950 22:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:17:08.950 22:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:17:08.950 22:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:17:08.950 22:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:17:08.950 22:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:17:08.950 22:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:17:08.950 22:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:17:08.950 22:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:17:08.950 22:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:17:08.950 22:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:17:08.950 22:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:17:08.950 22:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:17:08.950 22:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:17:08.950 22:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:17:08.950 22:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1406073 00:17:08.950 22:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1406073 /var/tmp/spdk-raid.sock 00:17:08.950 22:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:08.950 22:00:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 1406073 ']' 00:17:08.950 22:00:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:08.951 22:00:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:08.951 22:00:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:08.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:08.951 22:00:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:08.951 22:00:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.951 [2024-07-13 22:00:28.268727] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:08.951 [2024-07-13 22:00:28.268818] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1406073 ] 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3d:02.3 cannot be used 
00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:09.210 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:09.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:09.210 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:09.210 [2024-07-13 22:00:28.433058] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.501 [2024-07-13 22:00:28.635793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.501 [2024-07-13 22:00:28.879413] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:09.501 [2024-07-13 22:00:28.879447] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:09.761 22:00:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:09.761 22:00:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:17:09.761 22:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:17:09.761 22:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:09.761 22:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:17:09.761 22:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:17:09.761 22:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:09.761 22:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:09.761 22:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:09.761 22:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:09.761 22:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:10.020 malloc1 00:17:10.020 22:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:10.020 [2024-07-13 22:00:29.377352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:10.020 [2024-07-13 22:00:29.377411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.020 [2024-07-13 22:00:29.377451] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:17:10.020 [2024-07-13 22:00:29.377463] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.020 [2024-07-13 22:00:29.379486] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.020 [2024-07-13 22:00:29.379515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:10.020 pt1 00:17:10.020 22:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:10.020 22:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:10.020 22:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:17:10.020 22:00:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:17:10.020 22:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:10.020 22:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:10.020 22:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:10.020 22:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:10.020 22:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:10.279 malloc2 00:17:10.279 22:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:10.537 [2024-07-13 22:00:29.766126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:10.537 [2024-07-13 22:00:29.766176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.537 [2024-07-13 22:00:29.766197] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:17:10.537 [2024-07-13 22:00:29.766208] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.537 [2024-07-13 22:00:29.768304] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.537 [2024-07-13 22:00:29.768339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:10.537 pt2 00:17:10.537 22:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:10.537 22:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:10.537 22:00:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:17:10.537 22:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:17:10.537 22:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:10.537 22:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:10.537 22:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:10.537 22:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:10.537 22:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:10.795 malloc3 00:17:10.795 22:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:10.795 [2024-07-13 22:00:30.138214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:10.795 [2024-07-13 22:00:30.138271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.795 [2024-07-13 22:00:30.138312] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:17:10.795 [2024-07-13 22:00:30.138323] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.795 [2024-07-13 22:00:30.140424] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.795 [2024-07-13 22:00:30.140451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:10.795 pt3 00:17:10.795 22:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 
00:17:10.795 22:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:10.795 22:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:11.053 [2024-07-13 22:00:30.298664] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:11.053 [2024-07-13 22:00:30.300290] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:11.053 [2024-07-13 22:00:30.300349] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:11.053 [2024-07-13 22:00:30.300509] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041480 00:17:11.053 [2024-07-13 22:00:30.300527] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:11.053 [2024-07-13 22:00:30.300773] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:17:11.053 [2024-07-13 22:00:30.300961] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041480 00:17:11.053 [2024-07-13 22:00:30.300973] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041480 00:17:11.053 [2024-07-13 22:00:30.301110] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.053 22:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:11.053 22:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:11.053 22:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:11.053 22:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:11.053 22:00:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:11.053 22:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:11.053 22:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:11.053 22:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:11.053 22:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:11.053 22:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:11.053 22:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.053 22:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.312 22:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:11.312 "name": "raid_bdev1", 00:17:11.312 "uuid": "e5114e36-7936-422d-9c71-40e3317a8348", 00:17:11.312 "strip_size_kb": 0, 00:17:11.312 "state": "online", 00:17:11.312 "raid_level": "raid1", 00:17:11.312 "superblock": true, 00:17:11.312 "num_base_bdevs": 3, 00:17:11.312 "num_base_bdevs_discovered": 3, 00:17:11.312 "num_base_bdevs_operational": 3, 00:17:11.312 "base_bdevs_list": [ 00:17:11.312 { 00:17:11.312 "name": "pt1", 00:17:11.312 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:11.312 "is_configured": true, 00:17:11.312 "data_offset": 2048, 00:17:11.312 "data_size": 63488 00:17:11.312 }, 00:17:11.312 { 00:17:11.312 "name": "pt2", 00:17:11.312 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:11.312 "is_configured": true, 00:17:11.312 "data_offset": 2048, 00:17:11.312 "data_size": 63488 00:17:11.312 }, 00:17:11.312 { 00:17:11.312 "name": "pt3", 00:17:11.312 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:11.312 "is_configured": true, 00:17:11.312 
"data_offset": 2048, 00:17:11.312 "data_size": 63488 00:17:11.312 } 00:17:11.312 ] 00:17:11.312 }' 00:17:11.312 22:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:11.312 22:00:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.571 22:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:11.571 22:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:11.571 22:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:11.571 22:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:11.571 22:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:11.571 22:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:11.571 22:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:11.571 22:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:11.830 [2024-07-13 22:00:31.097037] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:11.830 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:11.830 "name": "raid_bdev1", 00:17:11.830 "aliases": [ 00:17:11.830 "e5114e36-7936-422d-9c71-40e3317a8348" 00:17:11.830 ], 00:17:11.830 "product_name": "Raid Volume", 00:17:11.830 "block_size": 512, 00:17:11.830 "num_blocks": 63488, 00:17:11.830 "uuid": "e5114e36-7936-422d-9c71-40e3317a8348", 00:17:11.830 "assigned_rate_limits": { 00:17:11.830 "rw_ios_per_sec": 0, 00:17:11.830 "rw_mbytes_per_sec": 0, 00:17:11.830 "r_mbytes_per_sec": 0, 00:17:11.830 "w_mbytes_per_sec": 0 00:17:11.830 }, 00:17:11.830 "claimed": false, 00:17:11.830 
"zoned": false, 00:17:11.830 "supported_io_types": { 00:17:11.830 "read": true, 00:17:11.830 "write": true, 00:17:11.830 "unmap": false, 00:17:11.830 "flush": false, 00:17:11.830 "reset": true, 00:17:11.830 "nvme_admin": false, 00:17:11.830 "nvme_io": false, 00:17:11.830 "nvme_io_md": false, 00:17:11.830 "write_zeroes": true, 00:17:11.830 "zcopy": false, 00:17:11.830 "get_zone_info": false, 00:17:11.830 "zone_management": false, 00:17:11.830 "zone_append": false, 00:17:11.830 "compare": false, 00:17:11.830 "compare_and_write": false, 00:17:11.830 "abort": false, 00:17:11.830 "seek_hole": false, 00:17:11.830 "seek_data": false, 00:17:11.830 "copy": false, 00:17:11.830 "nvme_iov_md": false 00:17:11.830 }, 00:17:11.830 "memory_domains": [ 00:17:11.830 { 00:17:11.830 "dma_device_id": "system", 00:17:11.830 "dma_device_type": 1 00:17:11.830 }, 00:17:11.830 { 00:17:11.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.830 "dma_device_type": 2 00:17:11.830 }, 00:17:11.830 { 00:17:11.830 "dma_device_id": "system", 00:17:11.830 "dma_device_type": 1 00:17:11.830 }, 00:17:11.830 { 00:17:11.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.830 "dma_device_type": 2 00:17:11.830 }, 00:17:11.830 { 00:17:11.830 "dma_device_id": "system", 00:17:11.830 "dma_device_type": 1 00:17:11.830 }, 00:17:11.830 { 00:17:11.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.830 "dma_device_type": 2 00:17:11.830 } 00:17:11.830 ], 00:17:11.830 "driver_specific": { 00:17:11.830 "raid": { 00:17:11.830 "uuid": "e5114e36-7936-422d-9c71-40e3317a8348", 00:17:11.830 "strip_size_kb": 0, 00:17:11.830 "state": "online", 00:17:11.830 "raid_level": "raid1", 00:17:11.830 "superblock": true, 00:17:11.830 "num_base_bdevs": 3, 00:17:11.830 "num_base_bdevs_discovered": 3, 00:17:11.830 "num_base_bdevs_operational": 3, 00:17:11.830 "base_bdevs_list": [ 00:17:11.830 { 00:17:11.830 "name": "pt1", 00:17:11.830 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:11.830 "is_configured": true, 00:17:11.830 
"data_offset": 2048, 00:17:11.830 "data_size": 63488 00:17:11.830 }, 00:17:11.830 { 00:17:11.830 "name": "pt2", 00:17:11.830 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:11.830 "is_configured": true, 00:17:11.830 "data_offset": 2048, 00:17:11.830 "data_size": 63488 00:17:11.830 }, 00:17:11.830 { 00:17:11.830 "name": "pt3", 00:17:11.830 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:11.830 "is_configured": true, 00:17:11.830 "data_offset": 2048, 00:17:11.830 "data_size": 63488 00:17:11.830 } 00:17:11.830 ] 00:17:11.830 } 00:17:11.831 } 00:17:11.831 }' 00:17:11.831 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:11.831 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:11.831 pt2 00:17:11.831 pt3' 00:17:11.831 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:11.831 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:11.831 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:12.088 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:12.088 "name": "pt1", 00:17:12.088 "aliases": [ 00:17:12.088 "00000000-0000-0000-0000-000000000001" 00:17:12.088 ], 00:17:12.088 "product_name": "passthru", 00:17:12.088 "block_size": 512, 00:17:12.088 "num_blocks": 65536, 00:17:12.088 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:12.088 "assigned_rate_limits": { 00:17:12.088 "rw_ios_per_sec": 0, 00:17:12.088 "rw_mbytes_per_sec": 0, 00:17:12.088 "r_mbytes_per_sec": 0, 00:17:12.088 "w_mbytes_per_sec": 0 00:17:12.088 }, 00:17:12.088 "claimed": true, 00:17:12.088 "claim_type": "exclusive_write", 00:17:12.088 "zoned": false, 00:17:12.088 
"supported_io_types": { 00:17:12.088 "read": true, 00:17:12.088 "write": true, 00:17:12.088 "unmap": true, 00:17:12.088 "flush": true, 00:17:12.088 "reset": true, 00:17:12.088 "nvme_admin": false, 00:17:12.088 "nvme_io": false, 00:17:12.088 "nvme_io_md": false, 00:17:12.088 "write_zeroes": true, 00:17:12.089 "zcopy": true, 00:17:12.089 "get_zone_info": false, 00:17:12.089 "zone_management": false, 00:17:12.089 "zone_append": false, 00:17:12.089 "compare": false, 00:17:12.089 "compare_and_write": false, 00:17:12.089 "abort": true, 00:17:12.089 "seek_hole": false, 00:17:12.089 "seek_data": false, 00:17:12.089 "copy": true, 00:17:12.089 "nvme_iov_md": false 00:17:12.089 }, 00:17:12.089 "memory_domains": [ 00:17:12.089 { 00:17:12.089 "dma_device_id": "system", 00:17:12.089 "dma_device_type": 1 00:17:12.089 }, 00:17:12.089 { 00:17:12.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.089 "dma_device_type": 2 00:17:12.089 } 00:17:12.089 ], 00:17:12.089 "driver_specific": { 00:17:12.089 "passthru": { 00:17:12.089 "name": "pt1", 00:17:12.089 "base_bdev_name": "malloc1" 00:17:12.089 } 00:17:12.089 } 00:17:12.089 }' 00:17:12.089 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:12.089 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:12.089 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:12.089 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:12.089 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:12.347 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:12.347 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:12.347 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:12.347 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- 
# [[ null == null ]] 00:17:12.347 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:12.347 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:12.347 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:12.347 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:12.347 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:12.347 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:12.605 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:12.605 "name": "pt2", 00:17:12.605 "aliases": [ 00:17:12.605 "00000000-0000-0000-0000-000000000002" 00:17:12.605 ], 00:17:12.605 "product_name": "passthru", 00:17:12.605 "block_size": 512, 00:17:12.605 "num_blocks": 65536, 00:17:12.605 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:12.605 "assigned_rate_limits": { 00:17:12.605 "rw_ios_per_sec": 0, 00:17:12.605 "rw_mbytes_per_sec": 0, 00:17:12.605 "r_mbytes_per_sec": 0, 00:17:12.605 "w_mbytes_per_sec": 0 00:17:12.605 }, 00:17:12.605 "claimed": true, 00:17:12.605 "claim_type": "exclusive_write", 00:17:12.605 "zoned": false, 00:17:12.605 "supported_io_types": { 00:17:12.605 "read": true, 00:17:12.605 "write": true, 00:17:12.605 "unmap": true, 00:17:12.605 "flush": true, 00:17:12.605 "reset": true, 00:17:12.605 "nvme_admin": false, 00:17:12.605 "nvme_io": false, 00:17:12.605 "nvme_io_md": false, 00:17:12.605 "write_zeroes": true, 00:17:12.605 "zcopy": true, 00:17:12.605 "get_zone_info": false, 00:17:12.605 "zone_management": false, 00:17:12.605 "zone_append": false, 00:17:12.605 "compare": false, 00:17:12.605 "compare_and_write": false, 00:17:12.605 "abort": true, 00:17:12.605 "seek_hole": false, 
00:17:12.605 "seek_data": false, 00:17:12.605 "copy": true, 00:17:12.605 "nvme_iov_md": false 00:17:12.605 }, 00:17:12.605 "memory_domains": [ 00:17:12.605 { 00:17:12.605 "dma_device_id": "system", 00:17:12.605 "dma_device_type": 1 00:17:12.605 }, 00:17:12.605 { 00:17:12.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.605 "dma_device_type": 2 00:17:12.605 } 00:17:12.605 ], 00:17:12.605 "driver_specific": { 00:17:12.605 "passthru": { 00:17:12.605 "name": "pt2", 00:17:12.605 "base_bdev_name": "malloc2" 00:17:12.605 } 00:17:12.605 } 00:17:12.605 }' 00:17:12.605 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:12.605 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:12.605 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:12.605 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:12.605 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:12.605 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:12.605 22:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:12.863 22:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:12.863 22:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:12.863 22:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:12.863 22:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:12.863 22:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:12.863 22:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:12.863 22:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:12.863 22:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:13.121 22:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:13.121 "name": "pt3", 00:17:13.121 "aliases": [ 00:17:13.121 "00000000-0000-0000-0000-000000000003" 00:17:13.121 ], 00:17:13.121 "product_name": "passthru", 00:17:13.121 "block_size": 512, 00:17:13.121 "num_blocks": 65536, 00:17:13.121 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:13.121 "assigned_rate_limits": { 00:17:13.121 "rw_ios_per_sec": 0, 00:17:13.121 "rw_mbytes_per_sec": 0, 00:17:13.121 "r_mbytes_per_sec": 0, 00:17:13.121 "w_mbytes_per_sec": 0 00:17:13.121 }, 00:17:13.121 "claimed": true, 00:17:13.121 "claim_type": "exclusive_write", 00:17:13.121 "zoned": false, 00:17:13.121 "supported_io_types": { 00:17:13.121 "read": true, 00:17:13.121 "write": true, 00:17:13.121 "unmap": true, 00:17:13.121 "flush": true, 00:17:13.121 "reset": true, 00:17:13.121 "nvme_admin": false, 00:17:13.121 "nvme_io": false, 00:17:13.121 "nvme_io_md": false, 00:17:13.121 "write_zeroes": true, 00:17:13.121 "zcopy": true, 00:17:13.121 "get_zone_info": false, 00:17:13.121 "zone_management": false, 00:17:13.121 "zone_append": false, 00:17:13.121 "compare": false, 00:17:13.121 "compare_and_write": false, 00:17:13.121 "abort": true, 00:17:13.121 "seek_hole": false, 00:17:13.121 "seek_data": false, 00:17:13.121 "copy": true, 00:17:13.121 "nvme_iov_md": false 00:17:13.121 }, 00:17:13.121 "memory_domains": [ 00:17:13.121 { 00:17:13.121 "dma_device_id": "system", 00:17:13.121 "dma_device_type": 1 00:17:13.121 }, 00:17:13.121 { 00:17:13.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.121 "dma_device_type": 2 00:17:13.121 } 00:17:13.121 ], 00:17:13.121 "driver_specific": { 00:17:13.121 "passthru": { 00:17:13.121 "name": "pt3", 00:17:13.121 "base_bdev_name": "malloc3" 
00:17:13.121 } 00:17:13.121 } 00:17:13.121 }' 00:17:13.121 22:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.121 22:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.121 22:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:13.121 22:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.121 22:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.121 22:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:13.121 22:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:13.121 22:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:13.379 22:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:13.379 22:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:13.379 22:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:13.379 22:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:13.379 22:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:13.379 22:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:13.379 [2024-07-13 22:00:32.765445] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:13.639 22:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=e5114e36-7936-422d-9c71-40e3317a8348 00:17:13.639 22:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z e5114e36-7936-422d-9c71-40e3317a8348 ']' 00:17:13.639 22:00:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:13.639 [2024-07-13 22:00:32.933566] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:13.639 [2024-07-13 22:00:32.933605] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:13.639 [2024-07-13 22:00:32.933690] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:13.639 [2024-07-13 22:00:32.933762] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:13.639 [2024-07-13 22:00:32.933776] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041480 name raid_bdev1, state offline 00:17:13.639 22:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.639 22:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:13.898 22:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:13.898 22:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:13.898 22:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:13.898 22:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:13.898 22:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:13.898 22:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:14.155 22:00:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:14.155 22:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:14.414 22:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:14.414 22:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:14.414 22:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:14.414 22:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:14.414 22:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:17:14.414 22:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:14.414 22:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:14.414 22:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:14.414 22:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:14.414 22:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:14.414 22:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:14.414 22:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:14.414 22:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:14.414 22:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:17:14.414 22:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:14.673 [2024-07-13 22:00:33.888054] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:14.673 [2024-07-13 22:00:33.889850] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:14.673 [2024-07-13 22:00:33.889916] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:14.673 [2024-07-13 22:00:33.889970] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:14.673 [2024-07-13 22:00:33.890015] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:14.673 [2024-07-13 22:00:33.890034] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:14.673 [2024-07-13 22:00:33.890052] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:14.673 [2024-07-13 22:00:33.890063] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state configuring 00:17:14.673 request: 00:17:14.673 { 00:17:14.673 "name": "raid_bdev1", 00:17:14.673 "raid_level": 
"raid1", 00:17:14.673 "base_bdevs": [ 00:17:14.673 "malloc1", 00:17:14.673 "malloc2", 00:17:14.673 "malloc3" 00:17:14.673 ], 00:17:14.673 "superblock": false, 00:17:14.673 "method": "bdev_raid_create", 00:17:14.673 "req_id": 1 00:17:14.673 } 00:17:14.673 Got JSON-RPC error response 00:17:14.673 response: 00:17:14.673 { 00:17:14.673 "code": -17, 00:17:14.673 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:14.673 } 00:17:14.673 22:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:17:14.673 22:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:14.673 22:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:14.673 22:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:14.673 22:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.673 22:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:14.673 22:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:14.673 22:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:17:14.673 22:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:14.932 [2024-07-13 22:00:34.208864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:14.932 [2024-07-13 22:00:34.208941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.932 [2024-07-13 22:00:34.208968] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042080 00:17:14.932 [2024-07-13 22:00:34.208979] 
vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.932 [2024-07-13 22:00:34.211073] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.932 [2024-07-13 22:00:34.211104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:14.932 [2024-07-13 22:00:34.211190] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:14.932 [2024-07-13 22:00:34.211244] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:14.932 pt1 00:17:14.932 22:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:14.932 22:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:14.932 22:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:14.932 22:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:14.932 22:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:14.932 22:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:14.932 22:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:14.932 22:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:14.932 22:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:14.932 22:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:14.932 22:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.932 22:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:15.191 22:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:15.191 "name": "raid_bdev1", 00:17:15.191 "uuid": "e5114e36-7936-422d-9c71-40e3317a8348", 00:17:15.191 "strip_size_kb": 0, 00:17:15.191 "state": "configuring", 00:17:15.191 "raid_level": "raid1", 00:17:15.191 "superblock": true, 00:17:15.191 "num_base_bdevs": 3, 00:17:15.191 "num_base_bdevs_discovered": 1, 00:17:15.191 "num_base_bdevs_operational": 3, 00:17:15.191 "base_bdevs_list": [ 00:17:15.191 { 00:17:15.191 "name": "pt1", 00:17:15.191 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:15.191 "is_configured": true, 00:17:15.191 "data_offset": 2048, 00:17:15.191 "data_size": 63488 00:17:15.191 }, 00:17:15.191 { 00:17:15.191 "name": null, 00:17:15.191 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:15.191 "is_configured": false, 00:17:15.191 "data_offset": 2048, 00:17:15.191 "data_size": 63488 00:17:15.191 }, 00:17:15.191 { 00:17:15.191 "name": null, 00:17:15.191 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:15.191 "is_configured": false, 00:17:15.191 "data_offset": 2048, 00:17:15.191 "data_size": 63488 00:17:15.191 } 00:17:15.191 ] 00:17:15.191 }' 00:17:15.191 22:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:15.191 22:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.468 22:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:17:15.468 22:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:15.733 [2024-07-13 22:00:34.974826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:15.733 [2024-07-13 22:00:34.974882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.733 
[2024-07-13 22:00:34.974930] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980 00:17:15.733 [2024-07-13 22:00:34.974941] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.733 [2024-07-13 22:00:34.975412] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.733 [2024-07-13 22:00:34.975430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:15.733 [2024-07-13 22:00:34.975514] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:15.733 [2024-07-13 22:00:34.975539] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:15.733 pt2 00:17:15.733 22:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:15.992 [2024-07-13 22:00:35.147312] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:15.992 22:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:15.992 22:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:15.992 22:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:15.992 22:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:15.992 22:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:15.992 22:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:15.992 22:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:15.992 22:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:15.992 22:00:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:15.992 22:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:15.992 22:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.992 22:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.992 22:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:15.992 "name": "raid_bdev1", 00:17:15.992 "uuid": "e5114e36-7936-422d-9c71-40e3317a8348", 00:17:15.992 "strip_size_kb": 0, 00:17:15.992 "state": "configuring", 00:17:15.992 "raid_level": "raid1", 00:17:15.992 "superblock": true, 00:17:15.992 "num_base_bdevs": 3, 00:17:15.992 "num_base_bdevs_discovered": 1, 00:17:15.992 "num_base_bdevs_operational": 3, 00:17:15.992 "base_bdevs_list": [ 00:17:15.992 { 00:17:15.992 "name": "pt1", 00:17:15.992 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:15.992 "is_configured": true, 00:17:15.992 "data_offset": 2048, 00:17:15.992 "data_size": 63488 00:17:15.992 }, 00:17:15.992 { 00:17:15.992 "name": null, 00:17:15.992 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:15.992 "is_configured": false, 00:17:15.992 "data_offset": 2048, 00:17:15.992 "data_size": 63488 00:17:15.992 }, 00:17:15.992 { 00:17:15.992 "name": null, 00:17:15.992 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:15.992 "is_configured": false, 00:17:15.992 "data_offset": 2048, 00:17:15.992 "data_size": 63488 00:17:15.992 } 00:17:15.992 ] 00:17:15.992 }' 00:17:15.992 22:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:15.992 22:00:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.561 22:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:16.561 22:00:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:16.561 22:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:16.820 [2024-07-13 22:00:35.989547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:16.820 [2024-07-13 22:00:35.989612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.820 [2024-07-13 22:00:35.989634] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:17:16.820 [2024-07-13 22:00:35.989647] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.820 [2024-07-13 22:00:35.990134] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.820 [2024-07-13 22:00:35.990159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:16.820 [2024-07-13 22:00:35.990241] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:16.820 [2024-07-13 22:00:35.990268] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:16.820 pt2 00:17:16.820 22:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:16.820 22:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:16.820 22:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:16.820 [2024-07-13 22:00:36.153960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:16.820 [2024-07-13 22:00:36.154025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:17:16.820 [2024-07-13 22:00:36.154048] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042f80 00:17:16.820 [2024-07-13 22:00:36.154061] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.820 [2024-07-13 22:00:36.154508] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.820 [2024-07-13 22:00:36.154530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:16.820 [2024-07-13 22:00:36.154609] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:16.820 [2024-07-13 22:00:36.154637] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:16.820 [2024-07-13 22:00:36.154789] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042680 00:17:16.820 [2024-07-13 22:00:36.154802] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:16.820 [2024-07-13 22:00:36.155068] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:17:16.820 [2024-07-13 22:00:36.155252] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042680 00:17:16.820 [2024-07-13 22:00:36.155263] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042680 00:17:16.820 [2024-07-13 22:00:36.155407] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.820 pt3 00:17:16.820 22:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:16.820 22:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:16.820 22:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:16.820 22:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:17:16.820 22:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:16.820 22:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:16.820 22:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:16.820 22:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:16.820 22:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:16.820 22:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:16.820 22:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:16.820 22:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:16.820 22:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.820 22:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.079 22:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:17.079 "name": "raid_bdev1", 00:17:17.079 "uuid": "e5114e36-7936-422d-9c71-40e3317a8348", 00:17:17.079 "strip_size_kb": 0, 00:17:17.079 "state": "online", 00:17:17.079 "raid_level": "raid1", 00:17:17.079 "superblock": true, 00:17:17.079 "num_base_bdevs": 3, 00:17:17.079 "num_base_bdevs_discovered": 3, 00:17:17.079 "num_base_bdevs_operational": 3, 00:17:17.079 "base_bdevs_list": [ 00:17:17.079 { 00:17:17.079 "name": "pt1", 00:17:17.079 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:17.079 "is_configured": true, 00:17:17.079 "data_offset": 2048, 00:17:17.079 "data_size": 63488 00:17:17.079 }, 00:17:17.079 { 00:17:17.079 "name": "pt2", 00:17:17.079 "uuid": "00000000-0000-0000-0000-000000000002", 
00:17:17.079 "is_configured": true, 00:17:17.079 "data_offset": 2048, 00:17:17.079 "data_size": 63488 00:17:17.079 }, 00:17:17.079 { 00:17:17.079 "name": "pt3", 00:17:17.079 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:17.079 "is_configured": true, 00:17:17.079 "data_offset": 2048, 00:17:17.079 "data_size": 63488 00:17:17.079 } 00:17:17.079 ] 00:17:17.079 }' 00:17:17.079 22:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:17.079 22:00:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.645 22:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:17.645 22:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:17.645 22:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:17.645 22:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:17.645 22:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:17.645 22:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:17.645 22:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:17.645 22:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:17.645 [2024-07-13 22:00:36.980411] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:17.645 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:17.645 "name": "raid_bdev1", 00:17:17.645 "aliases": [ 00:17:17.645 "e5114e36-7936-422d-9c71-40e3317a8348" 00:17:17.645 ], 00:17:17.645 "product_name": "Raid Volume", 00:17:17.645 "block_size": 512, 00:17:17.645 "num_blocks": 63488, 00:17:17.645 "uuid": 
"e5114e36-7936-422d-9c71-40e3317a8348", 00:17:17.645 "assigned_rate_limits": { 00:17:17.645 "rw_ios_per_sec": 0, 00:17:17.645 "rw_mbytes_per_sec": 0, 00:17:17.645 "r_mbytes_per_sec": 0, 00:17:17.645 "w_mbytes_per_sec": 0 00:17:17.645 }, 00:17:17.645 "claimed": false, 00:17:17.645 "zoned": false, 00:17:17.645 "supported_io_types": { 00:17:17.645 "read": true, 00:17:17.645 "write": true, 00:17:17.645 "unmap": false, 00:17:17.645 "flush": false, 00:17:17.645 "reset": true, 00:17:17.645 "nvme_admin": false, 00:17:17.645 "nvme_io": false, 00:17:17.645 "nvme_io_md": false, 00:17:17.645 "write_zeroes": true, 00:17:17.645 "zcopy": false, 00:17:17.645 "get_zone_info": false, 00:17:17.645 "zone_management": false, 00:17:17.645 "zone_append": false, 00:17:17.645 "compare": false, 00:17:17.645 "compare_and_write": false, 00:17:17.645 "abort": false, 00:17:17.645 "seek_hole": false, 00:17:17.645 "seek_data": false, 00:17:17.645 "copy": false, 00:17:17.645 "nvme_iov_md": false 00:17:17.645 }, 00:17:17.645 "memory_domains": [ 00:17:17.645 { 00:17:17.645 "dma_device_id": "system", 00:17:17.645 "dma_device_type": 1 00:17:17.645 }, 00:17:17.645 { 00:17:17.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.645 "dma_device_type": 2 00:17:17.645 }, 00:17:17.645 { 00:17:17.645 "dma_device_id": "system", 00:17:17.645 "dma_device_type": 1 00:17:17.645 }, 00:17:17.645 { 00:17:17.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.645 "dma_device_type": 2 00:17:17.645 }, 00:17:17.645 { 00:17:17.645 "dma_device_id": "system", 00:17:17.645 "dma_device_type": 1 00:17:17.645 }, 00:17:17.645 { 00:17:17.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.645 "dma_device_type": 2 00:17:17.645 } 00:17:17.645 ], 00:17:17.645 "driver_specific": { 00:17:17.646 "raid": { 00:17:17.646 "uuid": "e5114e36-7936-422d-9c71-40e3317a8348", 00:17:17.646 "strip_size_kb": 0, 00:17:17.646 "state": "online", 00:17:17.646 "raid_level": "raid1", 00:17:17.646 "superblock": true, 00:17:17.646 "num_base_bdevs": 
3, 00:17:17.646 "num_base_bdevs_discovered": 3, 00:17:17.646 "num_base_bdevs_operational": 3, 00:17:17.646 "base_bdevs_list": [ 00:17:17.646 { 00:17:17.646 "name": "pt1", 00:17:17.646 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:17.646 "is_configured": true, 00:17:17.646 "data_offset": 2048, 00:17:17.646 "data_size": 63488 00:17:17.646 }, 00:17:17.646 { 00:17:17.646 "name": "pt2", 00:17:17.646 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:17.646 "is_configured": true, 00:17:17.646 "data_offset": 2048, 00:17:17.646 "data_size": 63488 00:17:17.646 }, 00:17:17.646 { 00:17:17.646 "name": "pt3", 00:17:17.646 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:17.646 "is_configured": true, 00:17:17.646 "data_offset": 2048, 00:17:17.646 "data_size": 63488 00:17:17.646 } 00:17:17.646 ] 00:17:17.646 } 00:17:17.646 } 00:17:17.646 }' 00:17:17.646 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:17.904 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:17.904 pt2 00:17:17.904 pt3' 00:17:17.904 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:17.904 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:17.904 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:17.904 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:17.904 "name": "pt1", 00:17:17.904 "aliases": [ 00:17:17.904 "00000000-0000-0000-0000-000000000001" 00:17:17.904 ], 00:17:17.904 "product_name": "passthru", 00:17:17.904 "block_size": 512, 00:17:17.904 "num_blocks": 65536, 00:17:17.904 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:17.904 "assigned_rate_limits": { 
00:17:17.904 "rw_ios_per_sec": 0, 00:17:17.904 "rw_mbytes_per_sec": 0, 00:17:17.904 "r_mbytes_per_sec": 0, 00:17:17.904 "w_mbytes_per_sec": 0 00:17:17.904 }, 00:17:17.904 "claimed": true, 00:17:17.904 "claim_type": "exclusive_write", 00:17:17.904 "zoned": false, 00:17:17.904 "supported_io_types": { 00:17:17.904 "read": true, 00:17:17.904 "write": true, 00:17:17.904 "unmap": true, 00:17:17.904 "flush": true, 00:17:17.904 "reset": true, 00:17:17.904 "nvme_admin": false, 00:17:17.904 "nvme_io": false, 00:17:17.904 "nvme_io_md": false, 00:17:17.904 "write_zeroes": true, 00:17:17.904 "zcopy": true, 00:17:17.904 "get_zone_info": false, 00:17:17.904 "zone_management": false, 00:17:17.904 "zone_append": false, 00:17:17.904 "compare": false, 00:17:17.904 "compare_and_write": false, 00:17:17.904 "abort": true, 00:17:17.904 "seek_hole": false, 00:17:17.904 "seek_data": false, 00:17:17.904 "copy": true, 00:17:17.904 "nvme_iov_md": false 00:17:17.904 }, 00:17:17.904 "memory_domains": [ 00:17:17.904 { 00:17:17.904 "dma_device_id": "system", 00:17:17.904 "dma_device_type": 1 00:17:17.904 }, 00:17:17.904 { 00:17:17.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.904 "dma_device_type": 2 00:17:17.904 } 00:17:17.904 ], 00:17:17.904 "driver_specific": { 00:17:17.904 "passthru": { 00:17:17.904 "name": "pt1", 00:17:17.904 "base_bdev_name": "malloc1" 00:17:17.904 } 00:17:17.904 } 00:17:17.904 }' 00:17:17.904 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:17.904 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:18.163 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:18.163 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:18.163 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:18.163 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
00:17:18.163 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:18.163 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:18.163 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:18.163 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:18.163 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:18.163 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:18.163 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:18.163 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:18.163 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:18.423 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:18.423 "name": "pt2", 00:17:18.423 "aliases": [ 00:17:18.423 "00000000-0000-0000-0000-000000000002" 00:17:18.423 ], 00:17:18.423 "product_name": "passthru", 00:17:18.423 "block_size": 512, 00:17:18.423 "num_blocks": 65536, 00:17:18.423 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:18.423 "assigned_rate_limits": { 00:17:18.423 "rw_ios_per_sec": 0, 00:17:18.423 "rw_mbytes_per_sec": 0, 00:17:18.423 "r_mbytes_per_sec": 0, 00:17:18.423 "w_mbytes_per_sec": 0 00:17:18.423 }, 00:17:18.423 "claimed": true, 00:17:18.423 "claim_type": "exclusive_write", 00:17:18.423 "zoned": false, 00:17:18.423 "supported_io_types": { 00:17:18.423 "read": true, 00:17:18.423 "write": true, 00:17:18.423 "unmap": true, 00:17:18.423 "flush": true, 00:17:18.423 "reset": true, 00:17:18.423 "nvme_admin": false, 00:17:18.423 "nvme_io": false, 00:17:18.423 "nvme_io_md": false, 00:17:18.423 "write_zeroes": true, 
00:17:18.423 "zcopy": true, 00:17:18.423 "get_zone_info": false, 00:17:18.423 "zone_management": false, 00:17:18.423 "zone_append": false, 00:17:18.423 "compare": false, 00:17:18.423 "compare_and_write": false, 00:17:18.423 "abort": true, 00:17:18.423 "seek_hole": false, 00:17:18.423 "seek_data": false, 00:17:18.423 "copy": true, 00:17:18.423 "nvme_iov_md": false 00:17:18.423 }, 00:17:18.423 "memory_domains": [ 00:17:18.423 { 00:17:18.423 "dma_device_id": "system", 00:17:18.423 "dma_device_type": 1 00:17:18.423 }, 00:17:18.423 { 00:17:18.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.423 "dma_device_type": 2 00:17:18.423 } 00:17:18.423 ], 00:17:18.423 "driver_specific": { 00:17:18.423 "passthru": { 00:17:18.423 "name": "pt2", 00:17:18.423 "base_bdev_name": "malloc2" 00:17:18.423 } 00:17:18.423 } 00:17:18.423 }' 00:17:18.423 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:18.423 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:18.423 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:18.423 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:18.423 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:18.681 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:18.681 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:18.681 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:18.681 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:18.681 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:18.681 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:18.681 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # 
[[ null == null ]] 00:17:18.681 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:18.681 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:18.681 22:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:18.939 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:18.939 "name": "pt3", 00:17:18.939 "aliases": [ 00:17:18.939 "00000000-0000-0000-0000-000000000003" 00:17:18.939 ], 00:17:18.939 "product_name": "passthru", 00:17:18.939 "block_size": 512, 00:17:18.939 "num_blocks": 65536, 00:17:18.939 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:18.939 "assigned_rate_limits": { 00:17:18.940 "rw_ios_per_sec": 0, 00:17:18.940 "rw_mbytes_per_sec": 0, 00:17:18.940 "r_mbytes_per_sec": 0, 00:17:18.940 "w_mbytes_per_sec": 0 00:17:18.940 }, 00:17:18.940 "claimed": true, 00:17:18.940 "claim_type": "exclusive_write", 00:17:18.940 "zoned": false, 00:17:18.940 "supported_io_types": { 00:17:18.940 "read": true, 00:17:18.940 "write": true, 00:17:18.940 "unmap": true, 00:17:18.940 "flush": true, 00:17:18.940 "reset": true, 00:17:18.940 "nvme_admin": false, 00:17:18.940 "nvme_io": false, 00:17:18.940 "nvme_io_md": false, 00:17:18.940 "write_zeroes": true, 00:17:18.940 "zcopy": true, 00:17:18.940 "get_zone_info": false, 00:17:18.940 "zone_management": false, 00:17:18.940 "zone_append": false, 00:17:18.940 "compare": false, 00:17:18.940 "compare_and_write": false, 00:17:18.940 "abort": true, 00:17:18.940 "seek_hole": false, 00:17:18.940 "seek_data": false, 00:17:18.940 "copy": true, 00:17:18.940 "nvme_iov_md": false 00:17:18.940 }, 00:17:18.940 "memory_domains": [ 00:17:18.940 { 00:17:18.940 "dma_device_id": "system", 00:17:18.940 "dma_device_type": 1 00:17:18.940 }, 00:17:18.940 { 00:17:18.940 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:18.940 "dma_device_type": 2 00:17:18.940 } 00:17:18.940 ], 00:17:18.940 "driver_specific": { 00:17:18.940 "passthru": { 00:17:18.940 "name": "pt3", 00:17:18.940 "base_bdev_name": "malloc3" 00:17:18.940 } 00:17:18.940 } 00:17:18.940 }' 00:17:18.940 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:18.940 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:18.940 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:18.940 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:18.940 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:18.940 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:18.940 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:18.940 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:19.199 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:19.199 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:19.199 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:19.199 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:19.199 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:19.199 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:19.199 [2024-07-13 22:00:38.544487] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:19.199 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 
e5114e36-7936-422d-9c71-40e3317a8348 '!=' e5114e36-7936-422d-9c71-40e3317a8348 ']' 00:17:19.199 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:17:19.199 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:19.199 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:19.199 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:19.458 [2024-07-13 22:00:38.716755] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:19.458 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:19.458 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:19.458 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:19.458 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:19.458 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:19.458 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:19.458 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:19.459 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:19.459 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:19.459 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:19.459 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.459 22:00:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.718 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:19.718 "name": "raid_bdev1", 00:17:19.718 "uuid": "e5114e36-7936-422d-9c71-40e3317a8348", 00:17:19.718 "strip_size_kb": 0, 00:17:19.718 "state": "online", 00:17:19.718 "raid_level": "raid1", 00:17:19.718 "superblock": true, 00:17:19.718 "num_base_bdevs": 3, 00:17:19.718 "num_base_bdevs_discovered": 2, 00:17:19.718 "num_base_bdevs_operational": 2, 00:17:19.718 "base_bdevs_list": [ 00:17:19.718 { 00:17:19.718 "name": null, 00:17:19.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.718 "is_configured": false, 00:17:19.718 "data_offset": 2048, 00:17:19.718 "data_size": 63488 00:17:19.718 }, 00:17:19.718 { 00:17:19.718 "name": "pt2", 00:17:19.718 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.718 "is_configured": true, 00:17:19.718 "data_offset": 2048, 00:17:19.718 "data_size": 63488 00:17:19.718 }, 00:17:19.718 { 00:17:19.718 "name": "pt3", 00:17:19.718 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:19.718 "is_configured": true, 00:17:19.718 "data_offset": 2048, 00:17:19.718 "data_size": 63488 00:17:19.718 } 00:17:19.718 ] 00:17:19.718 }' 00:17:19.718 22:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:19.718 22:00:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.286 22:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:20.286 [2024-07-13 22:00:39.574940] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:20.286 [2024-07-13 22:00:39.574974] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:20.286 [2024-07-13 22:00:39.575051] bdev_raid.c: 
474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:20.286 [2024-07-13 22:00:39.575107] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:20.286 [2024-07-13 22:00:39.575121] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state offline 00:17:20.286 22:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.286 22:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:17:20.546 22:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:17:20.546 22:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:17:20.546 22:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:17:20.546 22:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:20.546 22:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:20.546 22:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:17:20.546 22:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:20.546 22:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:20.823 22:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:17:20.823 22:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:20.823 22:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:17:20.823 22:00:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:17:20.823 22:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:21.089 [2024-07-13 22:00:40.268725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:21.089 [2024-07-13 22:00:40.268787] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.089 [2024-07-13 22:00:40.268807] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043280 00:17:21.089 [2024-07-13 22:00:40.268820] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.089 [2024-07-13 22:00:40.270956] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.089 [2024-07-13 22:00:40.270987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:21.089 [2024-07-13 22:00:40.271065] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:21.089 [2024-07-13 22:00:40.271114] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:21.089 pt2 00:17:21.089 22:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:21.089 22:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:21.089 22:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:21.089 22:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:21.089 22:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:21.089 22:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 
00:17:21.089 22:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:21.089 22:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:21.089 22:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:21.089 22:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:21.089 22:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.089 22:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.089 22:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:21.089 "name": "raid_bdev1", 00:17:21.089 "uuid": "e5114e36-7936-422d-9c71-40e3317a8348", 00:17:21.089 "strip_size_kb": 0, 00:17:21.089 "state": "configuring", 00:17:21.089 "raid_level": "raid1", 00:17:21.089 "superblock": true, 00:17:21.089 "num_base_bdevs": 3, 00:17:21.089 "num_base_bdevs_discovered": 1, 00:17:21.089 "num_base_bdevs_operational": 2, 00:17:21.089 "base_bdevs_list": [ 00:17:21.089 { 00:17:21.089 "name": null, 00:17:21.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.089 "is_configured": false, 00:17:21.089 "data_offset": 2048, 00:17:21.089 "data_size": 63488 00:17:21.089 }, 00:17:21.089 { 00:17:21.089 "name": "pt2", 00:17:21.089 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:21.089 "is_configured": true, 00:17:21.089 "data_offset": 2048, 00:17:21.089 "data_size": 63488 00:17:21.089 }, 00:17:21.089 { 00:17:21.089 "name": null, 00:17:21.089 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:21.089 "is_configured": false, 00:17:21.089 "data_offset": 2048, 00:17:21.089 "data_size": 63488 00:17:21.089 } 00:17:21.089 ] 00:17:21.089 }' 00:17:21.089 22:00:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:21.089 22:00:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.657 22:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:17:21.657 22:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:17:21.657 22:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:17:21.657 22:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:21.916 [2024-07-13 22:00:41.139053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:21.916 [2024-07-13 22:00:41.139120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.916 [2024-07-13 22:00:41.139142] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043b80 00:17:21.916 [2024-07-13 22:00:41.139155] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.916 [2024-07-13 22:00:41.139629] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.916 [2024-07-13 22:00:41.139653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:21.916 [2024-07-13 22:00:41.139738] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:21.916 [2024-07-13 22:00:41.139764] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:21.916 [2024-07-13 22:00:41.139915] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043880 00:17:21.916 [2024-07-13 22:00:41.139936] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:21.916 [2024-07-13 22:00:41.140174] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000107e0 00:17:21.916 [2024-07-13 22:00:41.140342] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043880 00:17:21.916 [2024-07-13 22:00:41.140353] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043880 00:17:21.916 [2024-07-13 22:00:41.140495] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.916 pt3 00:17:21.916 22:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:21.916 22:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:21.916 22:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:21.916 22:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:21.916 22:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:21.916 22:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:21.916 22:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:21.916 22:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:21.916 22:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:21.916 22:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:21.916 22:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.916 22:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.180 22:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:22.180 
"name": "raid_bdev1", 00:17:22.180 "uuid": "e5114e36-7936-422d-9c71-40e3317a8348", 00:17:22.180 "strip_size_kb": 0, 00:17:22.180 "state": "online", 00:17:22.180 "raid_level": "raid1", 00:17:22.180 "superblock": true, 00:17:22.180 "num_base_bdevs": 3, 00:17:22.180 "num_base_bdevs_discovered": 2, 00:17:22.180 "num_base_bdevs_operational": 2, 00:17:22.180 "base_bdevs_list": [ 00:17:22.180 { 00:17:22.180 "name": null, 00:17:22.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.180 "is_configured": false, 00:17:22.180 "data_offset": 2048, 00:17:22.180 "data_size": 63488 00:17:22.180 }, 00:17:22.180 { 00:17:22.180 "name": "pt2", 00:17:22.180 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:22.180 "is_configured": true, 00:17:22.180 "data_offset": 2048, 00:17:22.180 "data_size": 63488 00:17:22.180 }, 00:17:22.180 { 00:17:22.180 "name": "pt3", 00:17:22.180 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:22.180 "is_configured": true, 00:17:22.180 "data_offset": 2048, 00:17:22.180 "data_size": 63488 00:17:22.180 } 00:17:22.180 ] 00:17:22.180 }' 00:17:22.180 22:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:22.180 22:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.437 22:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:22.695 [2024-07-13 22:00:41.973235] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:22.695 [2024-07-13 22:00:41.973271] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:22.695 [2024-07-13 22:00:41.973351] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:22.695 [2024-07-13 22:00:41.973415] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:17:22.695 [2024-07-13 22:00:41.973429] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043880 name raid_bdev1, state offline 00:17:22.695 22:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.695 22:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:17:22.954 22:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:17:22.954 22:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:17:22.954 22:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:17:22.954 22:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:17:22.954 22:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:23.246 22:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:23.246 [2024-07-13 22:00:42.498778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:23.246 [2024-07-13 22:00:42.498841] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.246 [2024-07-13 22:00:42.498866] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043e80 00:17:23.246 [2024-07-13 22:00:42.498878] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.246 [2024-07-13 22:00:42.501138] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.246 [2024-07-13 22:00:42.501168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 
00:17:23.246 [2024-07-13 22:00:42.501256] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:23.246 [2024-07-13 22:00:42.501297] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:23.246 [2024-07-13 22:00:42.501442] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:23.246 [2024-07-13 22:00:42.501459] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:23.246 [2024-07-13 22:00:42.501481] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000044480 name raid_bdev1, state configuring 00:17:23.246 [2024-07-13 22:00:42.501543] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:23.246 pt1 00:17:23.246 22:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:17:23.246 22:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:23.246 22:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:23.246 22:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:23.246 22:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:23.246 22:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:23.246 22:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:23.246 22:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:23.246 22:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:23.246 22:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:23.246 22:00:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:23.246 22:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.246 22:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.509 22:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:23.509 "name": "raid_bdev1", 00:17:23.509 "uuid": "e5114e36-7936-422d-9c71-40e3317a8348", 00:17:23.509 "strip_size_kb": 0, 00:17:23.509 "state": "configuring", 00:17:23.509 "raid_level": "raid1", 00:17:23.509 "superblock": true, 00:17:23.509 "num_base_bdevs": 3, 00:17:23.509 "num_base_bdevs_discovered": 1, 00:17:23.509 "num_base_bdevs_operational": 2, 00:17:23.509 "base_bdevs_list": [ 00:17:23.509 { 00:17:23.509 "name": null, 00:17:23.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.509 "is_configured": false, 00:17:23.509 "data_offset": 2048, 00:17:23.509 "data_size": 63488 00:17:23.509 }, 00:17:23.509 { 00:17:23.509 "name": "pt2", 00:17:23.509 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:23.509 "is_configured": true, 00:17:23.509 "data_offset": 2048, 00:17:23.509 "data_size": 63488 00:17:23.509 }, 00:17:23.509 { 00:17:23.509 "name": null, 00:17:23.509 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:23.509 "is_configured": false, 00:17:23.509 "data_offset": 2048, 00:17:23.509 "data_size": 63488 00:17:23.509 } 00:17:23.509 ] 00:17:23.509 }' 00:17:23.509 22:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:23.509 22:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.769 22:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:23.769 22:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:17:24.037 22:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:17:24.037 22:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:24.296 [2024-07-13 22:00:43.473351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:24.296 [2024-07-13 22:00:43.473411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.296 [2024-07-13 22:00:43.473451] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044a80 00:17:24.296 [2024-07-13 22:00:43.473462] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.296 [2024-07-13 22:00:43.473961] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.296 [2024-07-13 22:00:43.473985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:24.296 [2024-07-13 22:00:43.474089] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:24.297 [2024-07-13 22:00:43.474112] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:24.297 [2024-07-13 22:00:43.474256] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000044780 00:17:24.297 [2024-07-13 22:00:43.474267] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:24.297 [2024-07-13 22:00:43.474512] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:17:24.297 [2024-07-13 22:00:43.474711] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000044780 00:17:24.297 [2024-07-13 22:00:43.474726] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000044780 00:17:24.297 [2024-07-13 22:00:43.474862] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.297 pt3 00:17:24.297 22:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:24.297 22:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:24.297 22:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:24.297 22:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:24.297 22:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:24.297 22:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:24.297 22:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:24.297 22:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:24.297 22:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:24.297 22:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:24.297 22:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.297 22:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.297 22:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:24.297 "name": "raid_bdev1", 00:17:24.297 "uuid": "e5114e36-7936-422d-9c71-40e3317a8348", 00:17:24.297 "strip_size_kb": 0, 00:17:24.297 "state": "online", 00:17:24.297 "raid_level": "raid1", 00:17:24.297 "superblock": 
true, 00:17:24.297 "num_base_bdevs": 3, 00:17:24.297 "num_base_bdevs_discovered": 2, 00:17:24.297 "num_base_bdevs_operational": 2, 00:17:24.297 "base_bdevs_list": [ 00:17:24.297 { 00:17:24.297 "name": null, 00:17:24.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.297 "is_configured": false, 00:17:24.297 "data_offset": 2048, 00:17:24.297 "data_size": 63488 00:17:24.297 }, 00:17:24.297 { 00:17:24.297 "name": "pt2", 00:17:24.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:24.297 "is_configured": true, 00:17:24.297 "data_offset": 2048, 00:17:24.297 "data_size": 63488 00:17:24.297 }, 00:17:24.297 { 00:17:24.297 "name": "pt3", 00:17:24.297 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:24.297 "is_configured": true, 00:17:24.297 "data_offset": 2048, 00:17:24.297 "data_size": 63488 00:17:24.297 } 00:17:24.297 ] 00:17:24.297 }' 00:17:24.297 22:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:24.297 22:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.865 22:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:17:24.865 22:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:25.124 22:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:17:25.124 22:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:25.124 22:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:17:25.124 [2024-07-13 22:00:44.484254] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:25.124 22:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- 
# '[' e5114e36-7936-422d-9c71-40e3317a8348 '!=' e5114e36-7936-422d-9c71-40e3317a8348 ']' 00:17:25.124 22:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1406073 00:17:25.124 22:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 1406073 ']' 00:17:25.124 22:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 1406073 00:17:25.124 22:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:17:25.124 22:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:25.124 22:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1406073 00:17:25.383 22:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:25.383 22:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:25.383 22:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1406073' 00:17:25.383 killing process with pid 1406073 00:17:25.383 22:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 1406073 00:17:25.383 [2024-07-13 22:00:44.551366] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:25.383 [2024-07-13 22:00:44.551460] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:25.383 [2024-07-13 22:00:44.551517] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:25.383 [2024-07-13 22:00:44.551532] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000044780 name raid_bdev1, state offline 00:17:25.383 22:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 1406073 00:17:25.641 [2024-07-13 22:00:44.779666] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 
00:17:27.039 22:00:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:17:27.039 00:17:27.039 real 0m17.830s 00:17:27.039 user 0m31.115s 00:17:27.039 sys 0m3.329s 00:17:27.039 22:00:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:27.039 22:00:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.039 ************************************ 00:17:27.039 END TEST raid_superblock_test 00:17:27.039 ************************************ 00:17:27.039 22:00:46 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:27.039 22:00:46 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:17:27.039 22:00:46 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:27.039 22:00:46 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:27.039 22:00:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:27.039 ************************************ 00:17:27.039 START TEST raid_read_error_test 00:17:27.039 ************************************ 00:17:27.039 22:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 read 00:17:27.039 22:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:17:27.039 22:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:17:27.039 22:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:17:27.039 22:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:27.039 22:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:27.040 22:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:17:27.040 22:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:27.040 22:00:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:27.040 22:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:17:27.040 22:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:27.040 22:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:27.040 22:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:17:27.040 22:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:27.040 22:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:27.040 22:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:27.040 22:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:27.040 22:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:27.040 22:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:27.040 22:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:27.040 22:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:27.040 22:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:27.040 22:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:17:27.040 22:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:17:27.040 22:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:27.040 22:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.guhRmp7ugT 00:17:27.040 22:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1409577 00:17:27.040 22:00:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1409577 /var/tmp/spdk-raid.sock 00:17:27.040 22:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:27.040 22:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 1409577 ']' 00:17:27.040 22:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:27.040 22:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:27.040 22:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:27.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:27.040 22:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:27.040 22:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.040 [2024-07-13 22:00:46.185241] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:27.040 [2024-07-13 22:00:46.185339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409577 ] 00:17:27.040 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:27.040 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:27.040 [identical "Reached maximum number of QAT devices" / "EAL: Requested device ... cannot be used" message pairs repeated for devices 0000:3d:01.1 through 0000:3f:02.7, elided] [2024-07-13 22:00:46.346077] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.299 [2024-07-13 22:00:46.544670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.558 [2024-07-13 22:00:46.784898] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:27.558 [2024-07-13 22:00:46.784933] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:27.558 22:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:27.558 22:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:27.558 22:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:27.558 22:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:27.816 BaseBdev1_malloc 00:17:27.816 22:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:28.075 true 00:17:28.075 22:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:28.334 [2024-07-13 22:00:47.499856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:28.334 [2024-07-13 22:00:47.499919] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.334 [2024-07-13 22:00:47.499943] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:17:28.334 [2024-07-13 22:00:47.499960] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.334 [2024-07-13 22:00:47.502066] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.334 [2024-07-13 22:00:47.502102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:28.334 BaseBdev1 00:17:28.334 22:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:28.334 22:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:28.334 BaseBdev2_malloc 00:17:28.593 22:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:28.593 true 00:17:28.593 22:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:28.851 [2024-07-13 22:00:48.053195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:17:28.851 [2024-07-13 22:00:48.053249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.851 [2024-07-13 22:00:48.053271] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:17:28.851 [2024-07-13 22:00:48.053288] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.851 [2024-07-13 22:00:48.055433] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.851 [2024-07-13 22:00:48.055462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:28.851 BaseBdev2 00:17:28.851 22:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:28.851 22:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:29.109 BaseBdev3_malloc 00:17:29.109 22:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:29.109 true 00:17:29.109 22:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:29.368 [2024-07-13 22:00:48.599211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:29.368 [2024-07-13 22:00:48.599274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.368 [2024-07-13 22:00:48.599296] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:17:29.368 [2024-07-13 22:00:48.599310] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.368 [2024-07-13 
22:00:48.601432] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.368 [2024-07-13 22:00:48.601463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:29.368 BaseBdev3 00:17:29.368 22:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:17:29.627 [2024-07-13 22:00:48.767686] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:29.627 [2024-07-13 22:00:48.769409] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:29.627 [2024-07-13 22:00:48.769476] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:29.627 [2024-07-13 22:00:48.769686] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041d80 00:17:29.627 [2024-07-13 22:00:48.769698] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:29.627 [2024-07-13 22:00:48.769967] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:17:29.627 [2024-07-13 22:00:48.770166] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041d80 00:17:29.627 [2024-07-13 22:00:48.770182] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041d80 00:17:29.627 [2024-07-13 22:00:48.770344] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:29.627 22:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:29.627 22:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:29.627 22:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:17:29.627 22:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:29.627 22:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:29.627 22:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:29.627 22:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:29.627 22:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:29.627 22:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:29.627 22:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:29.627 22:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.627 22:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.627 22:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:29.627 "name": "raid_bdev1", 00:17:29.627 "uuid": "18404ea4-6999-4c85-bda1-ab79ec6a1e8a", 00:17:29.627 "strip_size_kb": 0, 00:17:29.627 "state": "online", 00:17:29.627 "raid_level": "raid1", 00:17:29.627 "superblock": true, 00:17:29.627 "num_base_bdevs": 3, 00:17:29.627 "num_base_bdevs_discovered": 3, 00:17:29.627 "num_base_bdevs_operational": 3, 00:17:29.627 "base_bdevs_list": [ 00:17:29.627 { 00:17:29.627 "name": "BaseBdev1", 00:17:29.627 "uuid": "3b0ba83f-a61d-571f-9c12-3e81610abf1d", 00:17:29.627 "is_configured": true, 00:17:29.627 "data_offset": 2048, 00:17:29.627 "data_size": 63488 00:17:29.627 }, 00:17:29.627 { 00:17:29.627 "name": "BaseBdev2", 00:17:29.627 "uuid": "b47dca75-756a-539c-a606-8553b9c12140", 00:17:29.627 "is_configured": true, 00:17:29.627 "data_offset": 2048, 00:17:29.627 
"data_size": 63488 00:17:29.627 }, 00:17:29.627 { 00:17:29.627 "name": "BaseBdev3", 00:17:29.627 "uuid": "dc449400-0f9e-576e-9296-c94932db3f90", 00:17:29.627 "is_configured": true, 00:17:29.627 "data_offset": 2048, 00:17:29.627 "data_size": 63488 00:17:29.627 } 00:17:29.627 ] 00:17:29.627 }' 00:17:29.627 22:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:29.627 22:00:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.193 22:00:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:30.193 22:00:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:30.193 [2024-07-13 22:00:49.510978] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:17:31.130 22:00:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:31.389 22:00:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:31.389 22:00:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:31.389 22:00:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:17:31.389 22:00:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:17:31.389 22:00:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:31.389 22:00:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:31.389 22:00:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:31.389 22:00:50 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:31.389 22:00:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:31.389 22:00:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:31.389 22:00:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:31.389 22:00:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:31.389 22:00:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:31.389 22:00:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:31.389 22:00:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.389 22:00:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.648 22:00:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:31.648 "name": "raid_bdev1", 00:17:31.648 "uuid": "18404ea4-6999-4c85-bda1-ab79ec6a1e8a", 00:17:31.648 "strip_size_kb": 0, 00:17:31.648 "state": "online", 00:17:31.648 "raid_level": "raid1", 00:17:31.648 "superblock": true, 00:17:31.648 "num_base_bdevs": 3, 00:17:31.648 "num_base_bdevs_discovered": 3, 00:17:31.648 "num_base_bdevs_operational": 3, 00:17:31.648 "base_bdevs_list": [ 00:17:31.648 { 00:17:31.648 "name": "BaseBdev1", 00:17:31.648 "uuid": "3b0ba83f-a61d-571f-9c12-3e81610abf1d", 00:17:31.648 "is_configured": true, 00:17:31.648 "data_offset": 2048, 00:17:31.648 "data_size": 63488 00:17:31.648 }, 00:17:31.648 { 00:17:31.648 "name": "BaseBdev2", 00:17:31.648 "uuid": "b47dca75-756a-539c-a606-8553b9c12140", 00:17:31.648 "is_configured": true, 00:17:31.648 "data_offset": 2048, 00:17:31.648 "data_size": 63488 00:17:31.648 }, 00:17:31.648 { 00:17:31.648 "name": 
"BaseBdev3", 00:17:31.648 "uuid": "dc449400-0f9e-576e-9296-c94932db3f90", 00:17:31.648 "is_configured": true, 00:17:31.648 "data_offset": 2048, 00:17:31.648 "data_size": 63488 00:17:31.648 } 00:17:31.648 ] 00:17:31.648 }' 00:17:31.648 22:00:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:31.648 22:00:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.906 22:00:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:32.165 [2024-07-13 22:00:51.445334] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:32.165 [2024-07-13 22:00:51.445375] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:32.165 [2024-07-13 22:00:51.447646] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.165 [2024-07-13 22:00:51.447689] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.165 [2024-07-13 22:00:51.447779] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:32.165 [2024-07-13 22:00:51.447791] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041d80 name raid_bdev1, state offline 00:17:32.165 0 00:17:32.165 22:00:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1409577 00:17:32.165 22:00:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 1409577 ']' 00:17:32.165 22:00:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 1409577 00:17:32.165 22:00:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:17:32.165 22:00:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:32.165 22:00:51 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1409577 00:17:32.165 22:00:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:32.165 22:00:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:32.165 22:00:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1409577' 00:17:32.165 killing process with pid 1409577 00:17:32.165 22:00:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 1409577 00:17:32.165 [2024-07-13 22:00:51.504961] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:32.165 22:00:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 1409577 00:17:32.423 [2024-07-13 22:00:51.675685] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:33.801 22:00:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.guhRmp7ugT 00:17:33.801 22:00:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:33.801 22:00:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:33.801 22:00:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:17:33.801 22:00:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:17:33.801 22:00:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:33.801 22:00:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:33.801 22:00:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:33.801 00:17:33.801 real 0m6.880s 00:17:33.801 user 0m9.621s 00:17:33.801 sys 0m1.142s 00:17:33.801 22:00:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:33.801 22:00:52 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:33.801 ************************************ 00:17:33.801 END TEST raid_read_error_test 00:17:33.801 ************************************ 00:17:33.801 22:00:53 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:33.801 22:00:53 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:17:33.801 22:00:53 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:33.801 22:00:53 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:33.801 22:00:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:33.801 ************************************ 00:17:33.801 START TEST raid_write_error_test 00:17:33.801 ************************************ 00:17:33.801 22:00:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 write 00:17:33.801 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:17:33.801 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:17:33.801 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:17:33.801 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:33.801 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:33.801 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:17:33.801 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:33.801 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:33.801 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:17:33.801 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:33.801 22:00:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:33.801 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:17:33.801 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:33.801 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:33.801 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:33.801 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:33.801 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:33.801 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:33.801 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:33.802 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:33.802 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:33.802 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:17:33.802 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:17:33.802 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:33.802 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.CFCdBDVMuX 00:17:33.802 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1410823 00:17:33.802 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1410823 /var/tmp/spdk-raid.sock 00:17:33.802 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 
128k -q 1 -z -f -L bdev_raid 00:17:33.802 22:00:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 1410823 ']' 00:17:33.802 22:00:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:33.802 22:00:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:33.802 22:00:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:33.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:33.802 22:00:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:33.802 22:00:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.802 [2024-07-13 22:00:53.150581] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:33.802 [2024-07-13 22:00:53.150680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410823 ] 00:17:34.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.060 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:34.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.060 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:34.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.060 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:34.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.060 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:34.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.060 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:34.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.060 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:34.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.060 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:34.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.060 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:34.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.060 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:34.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.060 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:34.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.060 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:34.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.060 EAL: Requested device 0000:3d:02.3 cannot be used 
00:17:34.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.060 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:34.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.060 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:34.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.060 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:34.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.060 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:34.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.060 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:34.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.060 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:34.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.061 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:34.061 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.061 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:34.061 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.061 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:34.061 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.061 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:34.061 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.061 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:34.061 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.061 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:34.061 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.061 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:34.061 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.061 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:34.061 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.061 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:34.061 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.061 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:34.061 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.061 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:34.061 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.061 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:34.061 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.061 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:34.061 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:34.061 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:34.061 [2024-07-13 22:00:53.316444] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.318 [2024-07-13 22:00:53.519631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.575 [2024-07-13 22:00:53.760689] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:34.575 [2024-07-13 22:00:53.760723] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:34.575 22:00:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:34.575 22:00:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:34.575 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:34.575 22:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:34.833 BaseBdev1_malloc 00:17:34.833 22:00:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:35.092 true 00:17:35.092 22:00:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:35.092 [2024-07-13 22:00:54.414828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:35.092 [2024-07-13 22:00:54.414878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.092 [2024-07-13 22:00:54.414921] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:17:35.092 [2024-07-13 22:00:54.414938] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.092 [2024-07-13 22:00:54.416984] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.092 [2024-07-13 22:00:54.417013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:35.092 BaseBdev1 00:17:35.092 22:00:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:35.092 22:00:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:35.351 BaseBdev2_malloc 00:17:35.351 22:00:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:35.610 true 00:17:35.610 22:00:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:35.610 [2024-07-13 22:00:54.986372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:17:35.610 [2024-07-13 22:00:54.986422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.610 [2024-07-13 22:00:54.986456] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:17:35.610 [2024-07-13 22:00:54.986472] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.610 [2024-07-13 22:00:54.988479] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.610 [2024-07-13 22:00:54.988507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:35.610 BaseBdev2 00:17:35.868 22:00:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:35.868 22:00:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:35.868 BaseBdev3_malloc 00:17:35.868 22:00:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:36.140 true 00:17:36.140 22:00:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:36.140 [2024-07-13 22:00:55.522029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:36.140 [2024-07-13 22:00:55.522081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.140 [2024-07-13 22:00:55.522117] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:17:36.140 [2024-07-13 22:00:55.522130] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.140 
[2024-07-13 22:00:55.524202] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.140 [2024-07-13 22:00:55.524232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:36.405 BaseBdev3 00:17:36.405 22:00:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:17:36.405 [2024-07-13 22:00:55.682469] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:36.405 [2024-07-13 22:00:55.683994] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:36.405 [2024-07-13 22:00:55.684057] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:36.405 [2024-07-13 22:00:55.684258] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041d80 00:17:36.405 [2024-07-13 22:00:55.684270] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:36.405 [2024-07-13 22:00:55.684484] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:17:36.405 [2024-07-13 22:00:55.684659] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041d80 00:17:36.405 [2024-07-13 22:00:55.684673] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041d80 00:17:36.405 [2024-07-13 22:00:55.684822] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.405 22:00:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:36.405 22:00:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:36.405 22:00:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # 
local expected_state=online 00:17:36.405 22:00:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:36.405 22:00:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:36.405 22:00:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:36.405 22:00:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:36.405 22:00:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:36.405 22:00:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:36.405 22:00:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:36.405 22:00:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.405 22:00:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.663 22:00:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:36.663 "name": "raid_bdev1", 00:17:36.663 "uuid": "c6c81c09-c35a-45dd-84fd-081950f33a58", 00:17:36.663 "strip_size_kb": 0, 00:17:36.663 "state": "online", 00:17:36.663 "raid_level": "raid1", 00:17:36.663 "superblock": true, 00:17:36.663 "num_base_bdevs": 3, 00:17:36.663 "num_base_bdevs_discovered": 3, 00:17:36.663 "num_base_bdevs_operational": 3, 00:17:36.663 "base_bdevs_list": [ 00:17:36.663 { 00:17:36.664 "name": "BaseBdev1", 00:17:36.664 "uuid": "c5067322-6462-5728-98f8-8036cf39e526", 00:17:36.664 "is_configured": true, 00:17:36.664 "data_offset": 2048, 00:17:36.664 "data_size": 63488 00:17:36.664 }, 00:17:36.664 { 00:17:36.664 "name": "BaseBdev2", 00:17:36.664 "uuid": "d6f38b4b-a195-55da-9da3-0940633d3acf", 00:17:36.664 "is_configured": true, 00:17:36.664 "data_offset": 
2048, 00:17:36.664 "data_size": 63488 00:17:36.664 }, 00:17:36.664 { 00:17:36.664 "name": "BaseBdev3", 00:17:36.664 "uuid": "cb34574b-dd13-5060-9043-25a72b62aee1", 00:17:36.664 "is_configured": true, 00:17:36.664 "data_offset": 2048, 00:17:36.664 "data_size": 63488 00:17:36.664 } 00:17:36.664 ] 00:17:36.664 }' 00:17:36.664 22:00:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:36.664 22:00:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.231 22:00:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:37.231 22:00:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:37.231 [2024-07-13 22:00:56.401854] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:17:38.168 22:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:38.168 [2024-07-13 22:00:57.486124] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:17:38.168 [2024-07-13 22:00:57.486176] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:38.168 [2024-07-13 22:00:57.486399] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000107e0 00:17:38.168 22:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:38.168 22:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:38.168 22:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:17:38.168 22:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # 
expected_num_base_bdevs=2 00:17:38.168 22:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:38.168 22:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:38.168 22:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:38.168 22:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:38.168 22:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:38.168 22:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:38.168 22:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:38.168 22:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:38.168 22:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:38.168 22:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:38.168 22:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.168 22:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.459 22:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:38.459 "name": "raid_bdev1", 00:17:38.459 "uuid": "c6c81c09-c35a-45dd-84fd-081950f33a58", 00:17:38.459 "strip_size_kb": 0, 00:17:38.459 "state": "online", 00:17:38.459 "raid_level": "raid1", 00:17:38.459 "superblock": true, 00:17:38.459 "num_base_bdevs": 3, 00:17:38.459 "num_base_bdevs_discovered": 2, 00:17:38.459 "num_base_bdevs_operational": 2, 00:17:38.459 "base_bdevs_list": [ 00:17:38.459 { 00:17:38.459 "name": null, 
00:17:38.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.459 "is_configured": false, 00:17:38.459 "data_offset": 2048, 00:17:38.459 "data_size": 63488 00:17:38.459 }, 00:17:38.459 { 00:17:38.459 "name": "BaseBdev2", 00:17:38.459 "uuid": "d6f38b4b-a195-55da-9da3-0940633d3acf", 00:17:38.459 "is_configured": true, 00:17:38.459 "data_offset": 2048, 00:17:38.459 "data_size": 63488 00:17:38.459 }, 00:17:38.459 { 00:17:38.459 "name": "BaseBdev3", 00:17:38.459 "uuid": "cb34574b-dd13-5060-9043-25a72b62aee1", 00:17:38.459 "is_configured": true, 00:17:38.459 "data_offset": 2048, 00:17:38.459 "data_size": 63488 00:17:38.459 } 00:17:38.459 ] 00:17:38.459 }' 00:17:38.459 22:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:38.459 22:00:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.027 22:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:39.027 [2024-07-13 22:00:58.297210] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:39.027 [2024-07-13 22:00:58.297256] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:39.027 [2024-07-13 22:00:58.299496] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:39.027 [2024-07-13 22:00:58.299538] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.027 [2024-07-13 22:00:58.299616] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:39.027 [2024-07-13 22:00:58.299630] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041d80 name raid_bdev1, state offline 00:17:39.027 0 00:17:39.027 22:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1410823 00:17:39.027 22:00:58 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 1410823 ']' 00:17:39.027 22:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 1410823 00:17:39.027 22:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:17:39.027 22:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:39.027 22:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1410823 00:17:39.027 22:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:39.027 22:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:39.027 22:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1410823' 00:17:39.027 killing process with pid 1410823 00:17:39.027 22:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 1410823 00:17:39.027 [2024-07-13 22:00:58.366639] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:39.027 22:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 1410823 00:17:39.285 [2024-07-13 22:00:58.533117] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:40.660 22:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.CFCdBDVMuX 00:17:40.661 22:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:40.661 22:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:40.661 22:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:17:40.661 22:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:17:40.661 22:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:40.661 
22:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:40.661 22:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:40.661 00:17:40.661 real 0m6.804s 00:17:40.661 user 0m9.510s 00:17:40.661 sys 0m1.094s 00:17:40.661 22:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:40.661 22:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.661 ************************************ 00:17:40.661 END TEST raid_write_error_test 00:17:40.661 ************************************ 00:17:40.661 22:00:59 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:40.661 22:00:59 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:17:40.661 22:00:59 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:17:40.661 22:00:59 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:17:40.661 22:00:59 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:40.661 22:00:59 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:40.661 22:00:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:40.661 ************************************ 00:17:40.661 START TEST raid_state_function_test 00:17:40.661 ************************************ 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 false 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:40.661 
22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1412145 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1412145' 00:17:40.661 Process raid pid: 1412145 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1412145 /var/tmp/spdk-raid.sock 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 1412145 ']' 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:40.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:40.661 22:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.661 [2024-07-13 22:01:00.044407] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:40.661 [2024-07-13 22:01:00.044505] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.920 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:40.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.920 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:40.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.920 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:40.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.920 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:40.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.920 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:40.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.920 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:40.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.920 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:40.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.920 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:40.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.920 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:40.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.920 EAL: Requested device 
0000:3d:02.1 cannot be used 00:17:40.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.920 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:40.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.920 EAL: Requested device 0000:3d:02.3 cannot be used 00:17:40.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.920 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:40.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.920 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:40.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.920 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:40.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.921 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:40.921 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.921 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:40.921 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.921 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:40.921 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.921 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:40.921 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.921 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:40.921 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.921 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:40.921 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.921 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:40.921 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.921 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:40.921 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.921 EAL: Requested device 0000:3f:01.7 cannot be 
used 00:17:40.921 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.921 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:40.921 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.921 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:40.921 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.921 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:40.921 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.921 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:40.921 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.921 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:40.921 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.921 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:40.921 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.921 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:40.921 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:40.921 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:40.921 [2024-07-13 22:01:00.210204] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.180 [2024-07-13 22:01:00.413781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.439 [2024-07-13 22:01:00.659984] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.439 [2024-07-13 22:01:00.660017] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.439 22:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:41.439 22:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:17:41.439 22:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 
'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:41.698 [2024-07-13 22:01:00.958993] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:41.698 [2024-07-13 22:01:00.959050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:41.698 [2024-07-13 22:01:00.959061] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:41.698 [2024-07-13 22:01:00.959074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:41.698 [2024-07-13 22:01:00.959082] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:41.698 [2024-07-13 22:01:00.959093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:41.698 [2024-07-13 22:01:00.959101] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:41.698 [2024-07-13 22:01:00.959112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:41.698 22:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:41.698 22:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:41.698 22:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:41.698 22:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:41.698 22:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:41.698 22:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:41.698 22:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:41.698 22:01:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:41.698 22:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:41.698 22:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:41.698 22:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.698 22:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.957 22:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:41.957 "name": "Existed_Raid", 00:17:41.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.957 "strip_size_kb": 64, 00:17:41.957 "state": "configuring", 00:17:41.957 "raid_level": "raid0", 00:17:41.957 "superblock": false, 00:17:41.957 "num_base_bdevs": 4, 00:17:41.957 "num_base_bdevs_discovered": 0, 00:17:41.957 "num_base_bdevs_operational": 4, 00:17:41.957 "base_bdevs_list": [ 00:17:41.957 { 00:17:41.957 "name": "BaseBdev1", 00:17:41.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.957 "is_configured": false, 00:17:41.957 "data_offset": 0, 00:17:41.957 "data_size": 0 00:17:41.957 }, 00:17:41.957 { 00:17:41.957 "name": "BaseBdev2", 00:17:41.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.957 "is_configured": false, 00:17:41.957 "data_offset": 0, 00:17:41.957 "data_size": 0 00:17:41.957 }, 00:17:41.957 { 00:17:41.957 "name": "BaseBdev3", 00:17:41.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.957 "is_configured": false, 00:17:41.957 "data_offset": 0, 00:17:41.957 "data_size": 0 00:17:41.957 }, 00:17:41.957 { 00:17:41.957 "name": "BaseBdev4", 00:17:41.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.957 "is_configured": false, 00:17:41.957 "data_offset": 0, 00:17:41.957 "data_size": 0 00:17:41.957 } 
00:17:41.957 ] 00:17:41.957 }' 00:17:41.957 22:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:41.957 22:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.215 22:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:42.474 [2024-07-13 22:01:01.696819] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:42.474 [2024-07-13 22:01:01.696860] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:17:42.474 22:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:42.474 [2024-07-13 22:01:01.849264] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:42.474 [2024-07-13 22:01:01.849308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:42.474 [2024-07-13 22:01:01.849319] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:42.474 [2024-07-13 22:01:01.849353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:42.474 [2024-07-13 22:01:01.849361] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:42.474 [2024-07-13 22:01:01.849375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:42.474 [2024-07-13 22:01:01.849384] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:42.474 [2024-07-13 22:01:01.849395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: 
base bdev BaseBdev4 doesn't exist now 00:17:42.733 22:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:42.733 [2024-07-13 22:01:02.056839] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:42.733 BaseBdev1 00:17:42.733 22:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:42.733 22:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:42.733 22:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:42.733 22:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:42.733 22:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:42.733 22:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:42.733 22:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:42.992 22:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:43.251 [ 00:17:43.251 { 00:17:43.251 "name": "BaseBdev1", 00:17:43.251 "aliases": [ 00:17:43.251 "615e1927-000e-47d1-b28a-1bedd20c218c" 00:17:43.251 ], 00:17:43.251 "product_name": "Malloc disk", 00:17:43.251 "block_size": 512, 00:17:43.251 "num_blocks": 65536, 00:17:43.251 "uuid": "615e1927-000e-47d1-b28a-1bedd20c218c", 00:17:43.251 "assigned_rate_limits": { 00:17:43.251 "rw_ios_per_sec": 0, 00:17:43.251 "rw_mbytes_per_sec": 0, 00:17:43.251 "r_mbytes_per_sec": 0, 00:17:43.251 
"w_mbytes_per_sec": 0 00:17:43.251 }, 00:17:43.251 "claimed": true, 00:17:43.251 "claim_type": "exclusive_write", 00:17:43.251 "zoned": false, 00:17:43.251 "supported_io_types": { 00:17:43.251 "read": true, 00:17:43.251 "write": true, 00:17:43.251 "unmap": true, 00:17:43.251 "flush": true, 00:17:43.251 "reset": true, 00:17:43.251 "nvme_admin": false, 00:17:43.251 "nvme_io": false, 00:17:43.251 "nvme_io_md": false, 00:17:43.251 "write_zeroes": true, 00:17:43.251 "zcopy": true, 00:17:43.251 "get_zone_info": false, 00:17:43.251 "zone_management": false, 00:17:43.251 "zone_append": false, 00:17:43.251 "compare": false, 00:17:43.251 "compare_and_write": false, 00:17:43.251 "abort": true, 00:17:43.251 "seek_hole": false, 00:17:43.251 "seek_data": false, 00:17:43.251 "copy": true, 00:17:43.251 "nvme_iov_md": false 00:17:43.251 }, 00:17:43.251 "memory_domains": [ 00:17:43.251 { 00:17:43.252 "dma_device_id": "system", 00:17:43.252 "dma_device_type": 1 00:17:43.252 }, 00:17:43.252 { 00:17:43.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.252 "dma_device_type": 2 00:17:43.252 } 00:17:43.252 ], 00:17:43.252 "driver_specific": {} 00:17:43.252 } 00:17:43.252 ] 00:17:43.252 22:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:43.252 22:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:43.252 22:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:43.252 22:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:43.252 22:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:43.252 22:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:43.252 22:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:17:43.252 22:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:43.252 22:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:43.252 22:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:43.252 22:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:43.252 22:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.252 22:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.252 22:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:43.252 "name": "Existed_Raid", 00:17:43.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.252 "strip_size_kb": 64, 00:17:43.252 "state": "configuring", 00:17:43.252 "raid_level": "raid0", 00:17:43.252 "superblock": false, 00:17:43.252 "num_base_bdevs": 4, 00:17:43.252 "num_base_bdevs_discovered": 1, 00:17:43.252 "num_base_bdevs_operational": 4, 00:17:43.252 "base_bdevs_list": [ 00:17:43.252 { 00:17:43.252 "name": "BaseBdev1", 00:17:43.252 "uuid": "615e1927-000e-47d1-b28a-1bedd20c218c", 00:17:43.252 "is_configured": true, 00:17:43.252 "data_offset": 0, 00:17:43.252 "data_size": 65536 00:17:43.252 }, 00:17:43.252 { 00:17:43.252 "name": "BaseBdev2", 00:17:43.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.252 "is_configured": false, 00:17:43.252 "data_offset": 0, 00:17:43.252 "data_size": 0 00:17:43.252 }, 00:17:43.252 { 00:17:43.252 "name": "BaseBdev3", 00:17:43.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.252 "is_configured": false, 00:17:43.252 "data_offset": 0, 00:17:43.252 "data_size": 0 00:17:43.252 }, 00:17:43.252 { 00:17:43.252 
"name": "BaseBdev4", 00:17:43.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.252 "is_configured": false, 00:17:43.252 "data_offset": 0, 00:17:43.252 "data_size": 0 00:17:43.252 } 00:17:43.252 ] 00:17:43.252 }' 00:17:43.252 22:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:43.252 22:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.838 22:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:43.838 [2024-07-13 22:01:03.167778] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:43.838 [2024-07-13 22:01:03.167831] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:17:43.838 22:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:44.096 [2024-07-13 22:01:03.340327] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:44.096 [2024-07-13 22:01:03.342024] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:44.096 [2024-07-13 22:01:03.342062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:44.096 [2024-07-13 22:01:03.342073] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:44.096 [2024-07-13 22:01:03.342100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:44.096 [2024-07-13 22:01:03.342108] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:44.096 [2024-07-13 22:01:03.342122] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:44.096 22:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:44.096 22:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:44.096 22:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:44.096 22:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:44.096 22:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:44.096 22:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:44.096 22:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:44.096 22:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:44.096 22:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:44.096 22:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:44.096 22:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:44.096 22:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:44.096 22:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.096 22:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.354 22:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:44.354 "name": "Existed_Raid", 00:17:44.354 "uuid": "00000000-0000-0000-0000-000000000000", 
00:17:44.354 "strip_size_kb": 64, 00:17:44.354 "state": "configuring", 00:17:44.354 "raid_level": "raid0", 00:17:44.354 "superblock": false, 00:17:44.354 "num_base_bdevs": 4, 00:17:44.354 "num_base_bdevs_discovered": 1, 00:17:44.354 "num_base_bdevs_operational": 4, 00:17:44.354 "base_bdevs_list": [ 00:17:44.354 { 00:17:44.354 "name": "BaseBdev1", 00:17:44.354 "uuid": "615e1927-000e-47d1-b28a-1bedd20c218c", 00:17:44.354 "is_configured": true, 00:17:44.354 "data_offset": 0, 00:17:44.354 "data_size": 65536 00:17:44.354 }, 00:17:44.354 { 00:17:44.354 "name": "BaseBdev2", 00:17:44.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.354 "is_configured": false, 00:17:44.354 "data_offset": 0, 00:17:44.354 "data_size": 0 00:17:44.354 }, 00:17:44.354 { 00:17:44.354 "name": "BaseBdev3", 00:17:44.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.354 "is_configured": false, 00:17:44.354 "data_offset": 0, 00:17:44.354 "data_size": 0 00:17:44.354 }, 00:17:44.354 { 00:17:44.354 "name": "BaseBdev4", 00:17:44.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.354 "is_configured": false, 00:17:44.354 "data_offset": 0, 00:17:44.354 "data_size": 0 00:17:44.354 } 00:17:44.354 ] 00:17:44.354 }' 00:17:44.354 22:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:44.354 22:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.921 22:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:44.921 [2024-07-13 22:01:04.224271] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:44.921 BaseBdev2 00:17:44.921 22:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:44.921 22:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local 
bdev_name=BaseBdev2 00:17:44.921 22:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:44.921 22:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:44.921 22:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:44.921 22:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:44.921 22:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:45.179 22:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:45.438 [ 00:17:45.438 { 00:17:45.438 "name": "BaseBdev2", 00:17:45.438 "aliases": [ 00:17:45.438 "26ea0518-31cb-4590-b888-4175904371ec" 00:17:45.438 ], 00:17:45.438 "product_name": "Malloc disk", 00:17:45.438 "block_size": 512, 00:17:45.438 "num_blocks": 65536, 00:17:45.438 "uuid": "26ea0518-31cb-4590-b888-4175904371ec", 00:17:45.438 "assigned_rate_limits": { 00:17:45.438 "rw_ios_per_sec": 0, 00:17:45.438 "rw_mbytes_per_sec": 0, 00:17:45.438 "r_mbytes_per_sec": 0, 00:17:45.438 "w_mbytes_per_sec": 0 00:17:45.438 }, 00:17:45.438 "claimed": true, 00:17:45.438 "claim_type": "exclusive_write", 00:17:45.438 "zoned": false, 00:17:45.438 "supported_io_types": { 00:17:45.438 "read": true, 00:17:45.438 "write": true, 00:17:45.438 "unmap": true, 00:17:45.438 "flush": true, 00:17:45.438 "reset": true, 00:17:45.438 "nvme_admin": false, 00:17:45.438 "nvme_io": false, 00:17:45.438 "nvme_io_md": false, 00:17:45.438 "write_zeroes": true, 00:17:45.438 "zcopy": true, 00:17:45.438 "get_zone_info": false, 00:17:45.438 "zone_management": false, 00:17:45.438 "zone_append": false, 00:17:45.438 "compare": false, 
00:17:45.438 "compare_and_write": false, 00:17:45.438 "abort": true, 00:17:45.438 "seek_hole": false, 00:17:45.438 "seek_data": false, 00:17:45.438 "copy": true, 00:17:45.438 "nvme_iov_md": false 00:17:45.438 }, 00:17:45.438 "memory_domains": [ 00:17:45.438 { 00:17:45.438 "dma_device_id": "system", 00:17:45.438 "dma_device_type": 1 00:17:45.438 }, 00:17:45.438 { 00:17:45.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.438 "dma_device_type": 2 00:17:45.438 } 00:17:45.438 ], 00:17:45.438 "driver_specific": {} 00:17:45.438 } 00:17:45.438 ] 00:17:45.438 22:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:45.438 22:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:45.438 22:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:45.438 22:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:45.438 22:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:45.438 22:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:45.438 22:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:45.438 22:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:45.438 22:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:45.438 22:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:45.438 22:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:45.438 22:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:45.438 22:01:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:17:45.438 22:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.438 22:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.438 22:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:45.438 "name": "Existed_Raid", 00:17:45.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.439 "strip_size_kb": 64, 00:17:45.439 "state": "configuring", 00:17:45.439 "raid_level": "raid0", 00:17:45.439 "superblock": false, 00:17:45.439 "num_base_bdevs": 4, 00:17:45.439 "num_base_bdevs_discovered": 2, 00:17:45.439 "num_base_bdevs_operational": 4, 00:17:45.439 "base_bdevs_list": [ 00:17:45.439 { 00:17:45.439 "name": "BaseBdev1", 00:17:45.439 "uuid": "615e1927-000e-47d1-b28a-1bedd20c218c", 00:17:45.439 "is_configured": true, 00:17:45.439 "data_offset": 0, 00:17:45.439 "data_size": 65536 00:17:45.439 }, 00:17:45.439 { 00:17:45.439 "name": "BaseBdev2", 00:17:45.439 "uuid": "26ea0518-31cb-4590-b888-4175904371ec", 00:17:45.439 "is_configured": true, 00:17:45.439 "data_offset": 0, 00:17:45.439 "data_size": 65536 00:17:45.439 }, 00:17:45.439 { 00:17:45.439 "name": "BaseBdev3", 00:17:45.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.439 "is_configured": false, 00:17:45.439 "data_offset": 0, 00:17:45.439 "data_size": 0 00:17:45.439 }, 00:17:45.439 { 00:17:45.439 "name": "BaseBdev4", 00:17:45.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.439 "is_configured": false, 00:17:45.439 "data_offset": 0, 00:17:45.439 "data_size": 0 00:17:45.439 } 00:17:45.439 ] 00:17:45.439 }' 00:17:45.439 22:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:45.439 22:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:17:46.006 22:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:46.007 [2024-07-13 22:01:05.395823] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:46.266 BaseBdev3 00:17:46.266 22:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:46.266 22:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:46.266 22:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:46.266 22:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:46.266 22:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:46.266 22:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:46.266 22:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:46.266 22:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:46.537 [ 00:17:46.537 { 00:17:46.537 "name": "BaseBdev3", 00:17:46.537 "aliases": [ 00:17:46.537 "4ad6ed8f-d643-4765-8af3-03dbb740ce25" 00:17:46.537 ], 00:17:46.537 "product_name": "Malloc disk", 00:17:46.537 "block_size": 512, 00:17:46.537 "num_blocks": 65536, 00:17:46.537 "uuid": "4ad6ed8f-d643-4765-8af3-03dbb740ce25", 00:17:46.537 "assigned_rate_limits": { 00:17:46.537 "rw_ios_per_sec": 0, 00:17:46.537 "rw_mbytes_per_sec": 0, 00:17:46.537 "r_mbytes_per_sec": 0, 00:17:46.537 "w_mbytes_per_sec": 0 00:17:46.537 }, 
00:17:46.537 "claimed": true, 00:17:46.537 "claim_type": "exclusive_write", 00:17:46.537 "zoned": false, 00:17:46.537 "supported_io_types": { 00:17:46.537 "read": true, 00:17:46.537 "write": true, 00:17:46.537 "unmap": true, 00:17:46.537 "flush": true, 00:17:46.537 "reset": true, 00:17:46.537 "nvme_admin": false, 00:17:46.537 "nvme_io": false, 00:17:46.537 "nvme_io_md": false, 00:17:46.537 "write_zeroes": true, 00:17:46.537 "zcopy": true, 00:17:46.537 "get_zone_info": false, 00:17:46.537 "zone_management": false, 00:17:46.537 "zone_append": false, 00:17:46.537 "compare": false, 00:17:46.537 "compare_and_write": false, 00:17:46.537 "abort": true, 00:17:46.537 "seek_hole": false, 00:17:46.537 "seek_data": false, 00:17:46.537 "copy": true, 00:17:46.537 "nvme_iov_md": false 00:17:46.537 }, 00:17:46.537 "memory_domains": [ 00:17:46.537 { 00:17:46.537 "dma_device_id": "system", 00:17:46.537 "dma_device_type": 1 00:17:46.537 }, 00:17:46.537 { 00:17:46.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.537 "dma_device_type": 2 00:17:46.537 } 00:17:46.537 ], 00:17:46.537 "driver_specific": {} 00:17:46.537 } 00:17:46.537 ] 00:17:46.537 22:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:46.537 22:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:46.537 22:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:46.537 22:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:46.537 22:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:46.537 22:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:46.537 22:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:46.537 22:01:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:46.537 22:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:46.537 22:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:46.537 22:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:46.537 22:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:46.537 22:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:46.537 22:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.537 22:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.537 22:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:46.537 "name": "Existed_Raid", 00:17:46.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.537 "strip_size_kb": 64, 00:17:46.537 "state": "configuring", 00:17:46.537 "raid_level": "raid0", 00:17:46.537 "superblock": false, 00:17:46.537 "num_base_bdevs": 4, 00:17:46.537 "num_base_bdevs_discovered": 3, 00:17:46.537 "num_base_bdevs_operational": 4, 00:17:46.537 "base_bdevs_list": [ 00:17:46.537 { 00:17:46.537 "name": "BaseBdev1", 00:17:46.537 "uuid": "615e1927-000e-47d1-b28a-1bedd20c218c", 00:17:46.537 "is_configured": true, 00:17:46.537 "data_offset": 0, 00:17:46.537 "data_size": 65536 00:17:46.537 }, 00:17:46.537 { 00:17:46.537 "name": "BaseBdev2", 00:17:46.537 "uuid": "26ea0518-31cb-4590-b888-4175904371ec", 00:17:46.537 "is_configured": true, 00:17:46.537 "data_offset": 0, 00:17:46.538 "data_size": 65536 00:17:46.538 }, 00:17:46.538 { 00:17:46.538 "name": "BaseBdev3", 00:17:46.538 "uuid": 
"4ad6ed8f-d643-4765-8af3-03dbb740ce25", 00:17:46.538 "is_configured": true, 00:17:46.538 "data_offset": 0, 00:17:46.538 "data_size": 65536 00:17:46.538 }, 00:17:46.538 { 00:17:46.538 "name": "BaseBdev4", 00:17:46.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.538 "is_configured": false, 00:17:46.538 "data_offset": 0, 00:17:46.538 "data_size": 0 00:17:46.538 } 00:17:46.538 ] 00:17:46.538 }' 00:17:46.538 22:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:46.538 22:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.111 22:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:47.370 [2024-07-13 22:01:06.571201] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:47.370 [2024-07-13 22:01:06.571243] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:17:47.370 [2024-07-13 22:01:06.571254] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:47.370 [2024-07-13 22:01:06.571511] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:17:47.370 [2024-07-13 22:01:06.571709] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:17:47.370 [2024-07-13 22:01:06.571723] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:17:47.370 [2024-07-13 22:01:06.571986] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.370 BaseBdev4 00:17:47.370 22:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:17:47.370 22:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:17:47.370 
22:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:47.370 22:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:47.370 22:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:47.370 22:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:47.370 22:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:47.628 22:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:47.628 [ 00:17:47.628 { 00:17:47.628 "name": "BaseBdev4", 00:17:47.628 "aliases": [ 00:17:47.628 "fa008bb7-1df4-41a2-9bbf-ffd2cc777a1e" 00:17:47.628 ], 00:17:47.629 "product_name": "Malloc disk", 00:17:47.629 "block_size": 512, 00:17:47.629 "num_blocks": 65536, 00:17:47.629 "uuid": "fa008bb7-1df4-41a2-9bbf-ffd2cc777a1e", 00:17:47.629 "assigned_rate_limits": { 00:17:47.629 "rw_ios_per_sec": 0, 00:17:47.629 "rw_mbytes_per_sec": 0, 00:17:47.629 "r_mbytes_per_sec": 0, 00:17:47.629 "w_mbytes_per_sec": 0 00:17:47.629 }, 00:17:47.629 "claimed": true, 00:17:47.629 "claim_type": "exclusive_write", 00:17:47.629 "zoned": false, 00:17:47.629 "supported_io_types": { 00:17:47.629 "read": true, 00:17:47.629 "write": true, 00:17:47.629 "unmap": true, 00:17:47.629 "flush": true, 00:17:47.629 "reset": true, 00:17:47.629 "nvme_admin": false, 00:17:47.629 "nvme_io": false, 00:17:47.629 "nvme_io_md": false, 00:17:47.629 "write_zeroes": true, 00:17:47.629 "zcopy": true, 00:17:47.629 "get_zone_info": false, 00:17:47.629 "zone_management": false, 00:17:47.629 "zone_append": false, 00:17:47.629 "compare": false, 00:17:47.629 "compare_and_write": 
false, 00:17:47.629 "abort": true, 00:17:47.629 "seek_hole": false, 00:17:47.629 "seek_data": false, 00:17:47.629 "copy": true, 00:17:47.629 "nvme_iov_md": false 00:17:47.629 }, 00:17:47.629 "memory_domains": [ 00:17:47.629 { 00:17:47.629 "dma_device_id": "system", 00:17:47.629 "dma_device_type": 1 00:17:47.629 }, 00:17:47.629 { 00:17:47.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.629 "dma_device_type": 2 00:17:47.629 } 00:17:47.629 ], 00:17:47.629 "driver_specific": {} 00:17:47.629 } 00:17:47.629 ] 00:17:47.629 22:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:47.629 22:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:47.629 22:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:47.629 22:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:47.629 22:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:47.629 22:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:47.629 22:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:47.629 22:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:47.629 22:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:47.629 22:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:47.629 22:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:47.629 22:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:47.629 22:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:47.629 22:01:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.629 22:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.888 22:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:47.888 "name": "Existed_Raid", 00:17:47.888 "uuid": "267a38f9-41f5-4303-91b4-08ee5d4d9b70", 00:17:47.888 "strip_size_kb": 64, 00:17:47.888 "state": "online", 00:17:47.888 "raid_level": "raid0", 00:17:47.888 "superblock": false, 00:17:47.888 "num_base_bdevs": 4, 00:17:47.888 "num_base_bdevs_discovered": 4, 00:17:47.888 "num_base_bdevs_operational": 4, 00:17:47.888 "base_bdevs_list": [ 00:17:47.888 { 00:17:47.888 "name": "BaseBdev1", 00:17:47.888 "uuid": "615e1927-000e-47d1-b28a-1bedd20c218c", 00:17:47.888 "is_configured": true, 00:17:47.888 "data_offset": 0, 00:17:47.888 "data_size": 65536 00:17:47.888 }, 00:17:47.888 { 00:17:47.888 "name": "BaseBdev2", 00:17:47.888 "uuid": "26ea0518-31cb-4590-b888-4175904371ec", 00:17:47.888 "is_configured": true, 00:17:47.888 "data_offset": 0, 00:17:47.888 "data_size": 65536 00:17:47.888 }, 00:17:47.888 { 00:17:47.888 "name": "BaseBdev3", 00:17:47.888 "uuid": "4ad6ed8f-d643-4765-8af3-03dbb740ce25", 00:17:47.888 "is_configured": true, 00:17:47.888 "data_offset": 0, 00:17:47.888 "data_size": 65536 00:17:47.888 }, 00:17:47.888 { 00:17:47.888 "name": "BaseBdev4", 00:17:47.888 "uuid": "fa008bb7-1df4-41a2-9bbf-ffd2cc777a1e", 00:17:47.888 "is_configured": true, 00:17:47.888 "data_offset": 0, 00:17:47.888 "data_size": 65536 00:17:47.888 } 00:17:47.888 ] 00:17:47.888 }' 00:17:47.888 22:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:47.888 22:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.456 22:01:07 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:48.456 22:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:48.456 22:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:48.456 22:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:48.456 22:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:48.456 22:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:48.456 22:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:48.456 22:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:48.456 [2024-07-13 22:01:07.734609] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:48.456 22:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:48.456 "name": "Existed_Raid", 00:17:48.456 "aliases": [ 00:17:48.456 "267a38f9-41f5-4303-91b4-08ee5d4d9b70" 00:17:48.456 ], 00:17:48.456 "product_name": "Raid Volume", 00:17:48.456 "block_size": 512, 00:17:48.456 "num_blocks": 262144, 00:17:48.456 "uuid": "267a38f9-41f5-4303-91b4-08ee5d4d9b70", 00:17:48.456 "assigned_rate_limits": { 00:17:48.456 "rw_ios_per_sec": 0, 00:17:48.456 "rw_mbytes_per_sec": 0, 00:17:48.456 "r_mbytes_per_sec": 0, 00:17:48.456 "w_mbytes_per_sec": 0 00:17:48.456 }, 00:17:48.456 "claimed": false, 00:17:48.456 "zoned": false, 00:17:48.456 "supported_io_types": { 00:17:48.456 "read": true, 00:17:48.456 "write": true, 00:17:48.456 "unmap": true, 00:17:48.456 "flush": true, 00:17:48.456 "reset": true, 00:17:48.456 "nvme_admin": false, 00:17:48.456 "nvme_io": false, 00:17:48.456 "nvme_io_md": false, 00:17:48.456 
"write_zeroes": true, 00:17:48.456 "zcopy": false, 00:17:48.456 "get_zone_info": false, 00:17:48.456 "zone_management": false, 00:17:48.456 "zone_append": false, 00:17:48.456 "compare": false, 00:17:48.456 "compare_and_write": false, 00:17:48.456 "abort": false, 00:17:48.456 "seek_hole": false, 00:17:48.456 "seek_data": false, 00:17:48.456 "copy": false, 00:17:48.456 "nvme_iov_md": false 00:17:48.456 }, 00:17:48.456 "memory_domains": [ 00:17:48.456 { 00:17:48.456 "dma_device_id": "system", 00:17:48.456 "dma_device_type": 1 00:17:48.456 }, 00:17:48.456 { 00:17:48.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.456 "dma_device_type": 2 00:17:48.456 }, 00:17:48.456 { 00:17:48.456 "dma_device_id": "system", 00:17:48.456 "dma_device_type": 1 00:17:48.456 }, 00:17:48.456 { 00:17:48.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.456 "dma_device_type": 2 00:17:48.456 }, 00:17:48.456 { 00:17:48.456 "dma_device_id": "system", 00:17:48.456 "dma_device_type": 1 00:17:48.456 }, 00:17:48.456 { 00:17:48.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.456 "dma_device_type": 2 00:17:48.456 }, 00:17:48.456 { 00:17:48.456 "dma_device_id": "system", 00:17:48.456 "dma_device_type": 1 00:17:48.456 }, 00:17:48.456 { 00:17:48.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.456 "dma_device_type": 2 00:17:48.456 } 00:17:48.456 ], 00:17:48.456 "driver_specific": { 00:17:48.456 "raid": { 00:17:48.456 "uuid": "267a38f9-41f5-4303-91b4-08ee5d4d9b70", 00:17:48.456 "strip_size_kb": 64, 00:17:48.456 "state": "online", 00:17:48.456 "raid_level": "raid0", 00:17:48.456 "superblock": false, 00:17:48.456 "num_base_bdevs": 4, 00:17:48.456 "num_base_bdevs_discovered": 4, 00:17:48.456 "num_base_bdevs_operational": 4, 00:17:48.456 "base_bdevs_list": [ 00:17:48.456 { 00:17:48.456 "name": "BaseBdev1", 00:17:48.456 "uuid": "615e1927-000e-47d1-b28a-1bedd20c218c", 00:17:48.456 "is_configured": true, 00:17:48.456 "data_offset": 0, 00:17:48.456 "data_size": 65536 00:17:48.456 }, 
00:17:48.456 { 00:17:48.456 "name": "BaseBdev2", 00:17:48.456 "uuid": "26ea0518-31cb-4590-b888-4175904371ec", 00:17:48.456 "is_configured": true, 00:17:48.456 "data_offset": 0, 00:17:48.456 "data_size": 65536 00:17:48.456 }, 00:17:48.456 { 00:17:48.456 "name": "BaseBdev3", 00:17:48.456 "uuid": "4ad6ed8f-d643-4765-8af3-03dbb740ce25", 00:17:48.456 "is_configured": true, 00:17:48.456 "data_offset": 0, 00:17:48.456 "data_size": 65536 00:17:48.456 }, 00:17:48.456 { 00:17:48.456 "name": "BaseBdev4", 00:17:48.456 "uuid": "fa008bb7-1df4-41a2-9bbf-ffd2cc777a1e", 00:17:48.456 "is_configured": true, 00:17:48.456 "data_offset": 0, 00:17:48.456 "data_size": 65536 00:17:48.456 } 00:17:48.456 ] 00:17:48.456 } 00:17:48.456 } 00:17:48.456 }' 00:17:48.456 22:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:48.456 22:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:48.456 BaseBdev2 00:17:48.456 BaseBdev3 00:17:48.456 BaseBdev4' 00:17:48.456 22:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:48.456 22:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:48.456 22:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:48.715 22:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:48.715 "name": "BaseBdev1", 00:17:48.715 "aliases": [ 00:17:48.715 "615e1927-000e-47d1-b28a-1bedd20c218c" 00:17:48.715 ], 00:17:48.715 "product_name": "Malloc disk", 00:17:48.715 "block_size": 512, 00:17:48.715 "num_blocks": 65536, 00:17:48.715 "uuid": "615e1927-000e-47d1-b28a-1bedd20c218c", 00:17:48.716 "assigned_rate_limits": { 00:17:48.716 "rw_ios_per_sec": 0, 00:17:48.716 
"rw_mbytes_per_sec": 0, 00:17:48.716 "r_mbytes_per_sec": 0, 00:17:48.716 "w_mbytes_per_sec": 0 00:17:48.716 }, 00:17:48.716 "claimed": true, 00:17:48.716 "claim_type": "exclusive_write", 00:17:48.716 "zoned": false, 00:17:48.716 "supported_io_types": { 00:17:48.716 "read": true, 00:17:48.716 "write": true, 00:17:48.716 "unmap": true, 00:17:48.716 "flush": true, 00:17:48.716 "reset": true, 00:17:48.716 "nvme_admin": false, 00:17:48.716 "nvme_io": false, 00:17:48.716 "nvme_io_md": false, 00:17:48.716 "write_zeroes": true, 00:17:48.716 "zcopy": true, 00:17:48.716 "get_zone_info": false, 00:17:48.716 "zone_management": false, 00:17:48.716 "zone_append": false, 00:17:48.716 "compare": false, 00:17:48.716 "compare_and_write": false, 00:17:48.716 "abort": true, 00:17:48.716 "seek_hole": false, 00:17:48.716 "seek_data": false, 00:17:48.716 "copy": true, 00:17:48.716 "nvme_iov_md": false 00:17:48.716 }, 00:17:48.716 "memory_domains": [ 00:17:48.716 { 00:17:48.716 "dma_device_id": "system", 00:17:48.716 "dma_device_type": 1 00:17:48.716 }, 00:17:48.716 { 00:17:48.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.716 "dma_device_type": 2 00:17:48.716 } 00:17:48.716 ], 00:17:48.716 "driver_specific": {} 00:17:48.716 }' 00:17:48.716 22:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:48.716 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:48.716 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:48.716 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:48.716 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:48.716 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:48.716 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:48.975 22:01:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:48.975 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:48.975 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:48.975 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:48.975 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:48.975 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:48.975 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:48.975 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:49.234 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:49.234 "name": "BaseBdev2", 00:17:49.234 "aliases": [ 00:17:49.234 "26ea0518-31cb-4590-b888-4175904371ec" 00:17:49.234 ], 00:17:49.234 "product_name": "Malloc disk", 00:17:49.234 "block_size": 512, 00:17:49.234 "num_blocks": 65536, 00:17:49.234 "uuid": "26ea0518-31cb-4590-b888-4175904371ec", 00:17:49.234 "assigned_rate_limits": { 00:17:49.234 "rw_ios_per_sec": 0, 00:17:49.234 "rw_mbytes_per_sec": 0, 00:17:49.234 "r_mbytes_per_sec": 0, 00:17:49.234 "w_mbytes_per_sec": 0 00:17:49.234 }, 00:17:49.235 "claimed": true, 00:17:49.235 "claim_type": "exclusive_write", 00:17:49.235 "zoned": false, 00:17:49.235 "supported_io_types": { 00:17:49.235 "read": true, 00:17:49.235 "write": true, 00:17:49.235 "unmap": true, 00:17:49.235 "flush": true, 00:17:49.235 "reset": true, 00:17:49.235 "nvme_admin": false, 00:17:49.235 "nvme_io": false, 00:17:49.235 "nvme_io_md": false, 00:17:49.235 "write_zeroes": true, 00:17:49.235 "zcopy": true, 00:17:49.235 "get_zone_info": false, 
00:17:49.235 "zone_management": false, 00:17:49.235 "zone_append": false, 00:17:49.235 "compare": false, 00:17:49.235 "compare_and_write": false, 00:17:49.235 "abort": true, 00:17:49.235 "seek_hole": false, 00:17:49.235 "seek_data": false, 00:17:49.235 "copy": true, 00:17:49.235 "nvme_iov_md": false 00:17:49.235 }, 00:17:49.235 "memory_domains": [ 00:17:49.235 { 00:17:49.235 "dma_device_id": "system", 00:17:49.235 "dma_device_type": 1 00:17:49.235 }, 00:17:49.235 { 00:17:49.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.235 "dma_device_type": 2 00:17:49.235 } 00:17:49.235 ], 00:17:49.235 "driver_specific": {} 00:17:49.235 }' 00:17:49.235 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:49.235 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:49.235 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:49.235 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:49.235 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:49.235 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:49.235 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:49.235 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:49.235 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:49.235 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:49.235 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:49.492 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:49.492 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 
00:17:49.492 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:49.492 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:49.492 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:49.492 "name": "BaseBdev3", 00:17:49.492 "aliases": [ 00:17:49.492 "4ad6ed8f-d643-4765-8af3-03dbb740ce25" 00:17:49.492 ], 00:17:49.492 "product_name": "Malloc disk", 00:17:49.492 "block_size": 512, 00:17:49.492 "num_blocks": 65536, 00:17:49.492 "uuid": "4ad6ed8f-d643-4765-8af3-03dbb740ce25", 00:17:49.492 "assigned_rate_limits": { 00:17:49.492 "rw_ios_per_sec": 0, 00:17:49.492 "rw_mbytes_per_sec": 0, 00:17:49.492 "r_mbytes_per_sec": 0, 00:17:49.492 "w_mbytes_per_sec": 0 00:17:49.492 }, 00:17:49.493 "claimed": true, 00:17:49.493 "claim_type": "exclusive_write", 00:17:49.493 "zoned": false, 00:17:49.493 "supported_io_types": { 00:17:49.493 "read": true, 00:17:49.493 "write": true, 00:17:49.493 "unmap": true, 00:17:49.493 "flush": true, 00:17:49.493 "reset": true, 00:17:49.493 "nvme_admin": false, 00:17:49.493 "nvme_io": false, 00:17:49.493 "nvme_io_md": false, 00:17:49.493 "write_zeroes": true, 00:17:49.493 "zcopy": true, 00:17:49.493 "get_zone_info": false, 00:17:49.493 "zone_management": false, 00:17:49.493 "zone_append": false, 00:17:49.493 "compare": false, 00:17:49.493 "compare_and_write": false, 00:17:49.493 "abort": true, 00:17:49.493 "seek_hole": false, 00:17:49.493 "seek_data": false, 00:17:49.493 "copy": true, 00:17:49.493 "nvme_iov_md": false 00:17:49.493 }, 00:17:49.493 "memory_domains": [ 00:17:49.493 { 00:17:49.493 "dma_device_id": "system", 00:17:49.493 "dma_device_type": 1 00:17:49.493 }, 00:17:49.493 { 00:17:49.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.493 "dma_device_type": 2 00:17:49.493 } 00:17:49.493 ], 00:17:49.493 
"driver_specific": {} 00:17:49.493 }' 00:17:49.493 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:49.493 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:49.493 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:49.493 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:49.751 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:49.751 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:49.751 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:49.751 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:49.751 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:49.751 22:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:49.751 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:49.751 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:49.751 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:49.751 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:49.751 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:50.009 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:50.009 "name": "BaseBdev4", 00:17:50.009 "aliases": [ 00:17:50.009 "fa008bb7-1df4-41a2-9bbf-ffd2cc777a1e" 00:17:50.009 ], 00:17:50.009 "product_name": "Malloc disk", 00:17:50.009 "block_size": 512, 
00:17:50.009 "num_blocks": 65536, 00:17:50.009 "uuid": "fa008bb7-1df4-41a2-9bbf-ffd2cc777a1e", 00:17:50.009 "assigned_rate_limits": { 00:17:50.009 "rw_ios_per_sec": 0, 00:17:50.009 "rw_mbytes_per_sec": 0, 00:17:50.009 "r_mbytes_per_sec": 0, 00:17:50.009 "w_mbytes_per_sec": 0 00:17:50.009 }, 00:17:50.009 "claimed": true, 00:17:50.009 "claim_type": "exclusive_write", 00:17:50.009 "zoned": false, 00:17:50.010 "supported_io_types": { 00:17:50.010 "read": true, 00:17:50.010 "write": true, 00:17:50.010 "unmap": true, 00:17:50.010 "flush": true, 00:17:50.010 "reset": true, 00:17:50.010 "nvme_admin": false, 00:17:50.010 "nvme_io": false, 00:17:50.010 "nvme_io_md": false, 00:17:50.010 "write_zeroes": true, 00:17:50.010 "zcopy": true, 00:17:50.010 "get_zone_info": false, 00:17:50.010 "zone_management": false, 00:17:50.010 "zone_append": false, 00:17:50.010 "compare": false, 00:17:50.010 "compare_and_write": false, 00:17:50.010 "abort": true, 00:17:50.010 "seek_hole": false, 00:17:50.010 "seek_data": false, 00:17:50.010 "copy": true, 00:17:50.010 "nvme_iov_md": false 00:17:50.010 }, 00:17:50.010 "memory_domains": [ 00:17:50.010 { 00:17:50.010 "dma_device_id": "system", 00:17:50.010 "dma_device_type": 1 00:17:50.010 }, 00:17:50.010 { 00:17:50.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.010 "dma_device_type": 2 00:17:50.010 } 00:17:50.010 ], 00:17:50.010 "driver_specific": {} 00:17:50.010 }' 00:17:50.010 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:50.010 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:50.010 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:50.010 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:50.010 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:50.010 22:01:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:50.010 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:50.010 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:50.268 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:50.268 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:50.268 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:50.268 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:50.268 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:50.527 [2024-07-13 22:01:09.659426] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:50.527 [2024-07-13 22:01:09.659457] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:50.527 [2024-07-13 22:01:09.659506] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.527 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:50.527 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:17:50.527 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:50.527 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:50.527 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:50.527 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:17:50.527 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:17:50.527 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:50.527 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:50.527 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:50.527 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:50.527 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:50.527 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:50.527 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:50.527 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:50.527 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.527 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.527 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:50.527 "name": "Existed_Raid", 00:17:50.527 "uuid": "267a38f9-41f5-4303-91b4-08ee5d4d9b70", 00:17:50.527 "strip_size_kb": 64, 00:17:50.527 "state": "offline", 00:17:50.527 "raid_level": "raid0", 00:17:50.527 "superblock": false, 00:17:50.527 "num_base_bdevs": 4, 00:17:50.527 "num_base_bdevs_discovered": 3, 00:17:50.527 "num_base_bdevs_operational": 3, 00:17:50.527 "base_bdevs_list": [ 00:17:50.527 { 00:17:50.527 "name": null, 00:17:50.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.527 "is_configured": false, 00:17:50.527 "data_offset": 0, 00:17:50.527 "data_size": 65536 00:17:50.527 }, 00:17:50.527 { 00:17:50.527 
"name": "BaseBdev2", 00:17:50.527 "uuid": "26ea0518-31cb-4590-b888-4175904371ec", 00:17:50.527 "is_configured": true, 00:17:50.527 "data_offset": 0, 00:17:50.527 "data_size": 65536 00:17:50.527 }, 00:17:50.527 { 00:17:50.527 "name": "BaseBdev3", 00:17:50.527 "uuid": "4ad6ed8f-d643-4765-8af3-03dbb740ce25", 00:17:50.527 "is_configured": true, 00:17:50.527 "data_offset": 0, 00:17:50.527 "data_size": 65536 00:17:50.527 }, 00:17:50.527 { 00:17:50.527 "name": "BaseBdev4", 00:17:50.527 "uuid": "fa008bb7-1df4-41a2-9bbf-ffd2cc777a1e", 00:17:50.527 "is_configured": true, 00:17:50.528 "data_offset": 0, 00:17:50.528 "data_size": 65536 00:17:50.528 } 00:17:50.528 ] 00:17:50.528 }' 00:17:50.528 22:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:50.528 22:01:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.106 22:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:51.106 22:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:51.106 22:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:51.106 22:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.365 22:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:51.365 22:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:51.365 22:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:51.365 [2024-07-13 22:01:10.694908] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:51.623 22:01:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:51.623 22:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:51.623 22:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.623 22:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:51.623 22:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:51.623 22:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:51.623 22:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:51.881 [2024-07-13 22:01:11.108386] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:51.881 22:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:51.881 22:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:51.881 22:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.881 22:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:52.140 22:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:52.140 22:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:52.140 22:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete 
BaseBdev4 00:17:52.140 [2024-07-13 22:01:11.528242] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:52.140 [2024-07-13 22:01:11.528298] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:17:52.398 22:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:52.398 22:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:52.398 22:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:52.398 22:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.670 22:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:52.670 22:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:52.670 22:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:17:52.670 22:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:52.670 22:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:52.670 22:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:52.670 BaseBdev2 00:17:52.670 22:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:52.670 22:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:52.670 22:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:52.670 22:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
i 00:17:52.670 22:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:52.670 22:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:52.670 22:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:52.984 22:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:53.250 [ 00:17:53.250 { 00:17:53.250 "name": "BaseBdev2", 00:17:53.250 "aliases": [ 00:17:53.250 "5db7654d-e736-40eb-8c1c-eb995cc2f61c" 00:17:53.250 ], 00:17:53.250 "product_name": "Malloc disk", 00:17:53.250 "block_size": 512, 00:17:53.250 "num_blocks": 65536, 00:17:53.250 "uuid": "5db7654d-e736-40eb-8c1c-eb995cc2f61c", 00:17:53.250 "assigned_rate_limits": { 00:17:53.250 "rw_ios_per_sec": 0, 00:17:53.250 "rw_mbytes_per_sec": 0, 00:17:53.250 "r_mbytes_per_sec": 0, 00:17:53.250 "w_mbytes_per_sec": 0 00:17:53.250 }, 00:17:53.250 "claimed": false, 00:17:53.250 "zoned": false, 00:17:53.250 "supported_io_types": { 00:17:53.250 "read": true, 00:17:53.250 "write": true, 00:17:53.250 "unmap": true, 00:17:53.250 "flush": true, 00:17:53.250 "reset": true, 00:17:53.250 "nvme_admin": false, 00:17:53.250 "nvme_io": false, 00:17:53.250 "nvme_io_md": false, 00:17:53.250 "write_zeroes": true, 00:17:53.250 "zcopy": true, 00:17:53.250 "get_zone_info": false, 00:17:53.250 "zone_management": false, 00:17:53.250 "zone_append": false, 00:17:53.250 "compare": false, 00:17:53.250 "compare_and_write": false, 00:17:53.251 "abort": true, 00:17:53.251 "seek_hole": false, 00:17:53.251 "seek_data": false, 00:17:53.251 "copy": true, 00:17:53.251 "nvme_iov_md": false 00:17:53.251 }, 00:17:53.251 "memory_domains": [ 00:17:53.251 { 00:17:53.251 
"dma_device_id": "system", 00:17:53.251 "dma_device_type": 1 00:17:53.251 }, 00:17:53.251 { 00:17:53.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.251 "dma_device_type": 2 00:17:53.251 } 00:17:53.251 ], 00:17:53.251 "driver_specific": {} 00:17:53.251 } 00:17:53.251 ] 00:17:53.251 22:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:53.251 22:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:53.251 22:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:53.251 22:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:53.251 BaseBdev3 00:17:53.251 22:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:53.251 22:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:53.251 22:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:53.251 22:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:53.251 22:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:53.251 22:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:53.251 22:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:53.508 22:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:53.766 [ 00:17:53.766 { 00:17:53.766 "name": "BaseBdev3", 
00:17:53.766 "aliases": [ 00:17:53.766 "23c0f8f4-5d46-482d-8e1e-bf49adfd5421" 00:17:53.766 ], 00:17:53.766 "product_name": "Malloc disk", 00:17:53.766 "block_size": 512, 00:17:53.766 "num_blocks": 65536, 00:17:53.766 "uuid": "23c0f8f4-5d46-482d-8e1e-bf49adfd5421", 00:17:53.766 "assigned_rate_limits": { 00:17:53.766 "rw_ios_per_sec": 0, 00:17:53.766 "rw_mbytes_per_sec": 0, 00:17:53.766 "r_mbytes_per_sec": 0, 00:17:53.766 "w_mbytes_per_sec": 0 00:17:53.766 }, 00:17:53.766 "claimed": false, 00:17:53.766 "zoned": false, 00:17:53.766 "supported_io_types": { 00:17:53.766 "read": true, 00:17:53.766 "write": true, 00:17:53.766 "unmap": true, 00:17:53.766 "flush": true, 00:17:53.766 "reset": true, 00:17:53.766 "nvme_admin": false, 00:17:53.766 "nvme_io": false, 00:17:53.766 "nvme_io_md": false, 00:17:53.766 "write_zeroes": true, 00:17:53.766 "zcopy": true, 00:17:53.766 "get_zone_info": false, 00:17:53.766 "zone_management": false, 00:17:53.766 "zone_append": false, 00:17:53.766 "compare": false, 00:17:53.766 "compare_and_write": false, 00:17:53.766 "abort": true, 00:17:53.766 "seek_hole": false, 00:17:53.766 "seek_data": false, 00:17:53.766 "copy": true, 00:17:53.766 "nvme_iov_md": false 00:17:53.766 }, 00:17:53.766 "memory_domains": [ 00:17:53.766 { 00:17:53.766 "dma_device_id": "system", 00:17:53.766 "dma_device_type": 1 00:17:53.766 }, 00:17:53.766 { 00:17:53.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.766 "dma_device_type": 2 00:17:53.766 } 00:17:53.766 ], 00:17:53.766 "driver_specific": {} 00:17:53.766 } 00:17:53.766 ] 00:17:53.766 22:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:53.766 22:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:53.766 22:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:53.766 22:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:53.766 BaseBdev4 00:17:53.766 22:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:17:53.766 22:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:17:53.766 22:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:53.766 22:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:53.766 22:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:53.766 22:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:53.766 22:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:54.032 22:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:54.032 [ 00:17:54.032 { 00:17:54.032 "name": "BaseBdev4", 00:17:54.032 "aliases": [ 00:17:54.032 "1982877a-dccf-4be0-83da-e03d94b6f957" 00:17:54.032 ], 00:17:54.032 "product_name": "Malloc disk", 00:17:54.032 "block_size": 512, 00:17:54.032 "num_blocks": 65536, 00:17:54.032 "uuid": "1982877a-dccf-4be0-83da-e03d94b6f957", 00:17:54.032 "assigned_rate_limits": { 00:17:54.032 "rw_ios_per_sec": 0, 00:17:54.032 "rw_mbytes_per_sec": 0, 00:17:54.032 "r_mbytes_per_sec": 0, 00:17:54.032 "w_mbytes_per_sec": 0 00:17:54.032 }, 00:17:54.032 "claimed": false, 00:17:54.032 "zoned": false, 00:17:54.032 "supported_io_types": { 00:17:54.032 "read": true, 00:17:54.032 "write": true, 00:17:54.032 "unmap": true, 00:17:54.032 "flush": true, 00:17:54.032 
"reset": true, 00:17:54.032 "nvme_admin": false, 00:17:54.032 "nvme_io": false, 00:17:54.032 "nvme_io_md": false, 00:17:54.032 "write_zeroes": true, 00:17:54.032 "zcopy": true, 00:17:54.032 "get_zone_info": false, 00:17:54.032 "zone_management": false, 00:17:54.032 "zone_append": false, 00:17:54.032 "compare": false, 00:17:54.032 "compare_and_write": false, 00:17:54.032 "abort": true, 00:17:54.032 "seek_hole": false, 00:17:54.032 "seek_data": false, 00:17:54.032 "copy": true, 00:17:54.032 "nvme_iov_md": false 00:17:54.032 }, 00:17:54.032 "memory_domains": [ 00:17:54.032 { 00:17:54.032 "dma_device_id": "system", 00:17:54.032 "dma_device_type": 1 00:17:54.032 }, 00:17:54.032 { 00:17:54.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.032 "dma_device_type": 2 00:17:54.032 } 00:17:54.032 ], 00:17:54.032 "driver_specific": {} 00:17:54.032 } 00:17:54.032 ] 00:17:54.032 22:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:54.032 22:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:54.032 22:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:54.032 22:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:54.290 [2024-07-13 22:01:13.573735] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:54.290 [2024-07-13 22:01:13.573780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:54.290 [2024-07-13 22:01:13.573808] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:54.290 [2024-07-13 22:01:13.575617] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:54.290 [2024-07-13 
22:01:13.575666] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:54.290 22:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:54.290 22:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:54.290 22:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:54.290 22:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:54.290 22:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:54.290 22:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:54.290 22:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:54.290 22:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:54.290 22:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:54.290 22:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:54.290 22:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.290 22:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.549 22:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:54.549 "name": "Existed_Raid", 00:17:54.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.549 "strip_size_kb": 64, 00:17:54.549 "state": "configuring", 00:17:54.549 "raid_level": "raid0", 00:17:54.549 "superblock": false, 00:17:54.549 "num_base_bdevs": 4, 00:17:54.549 
"num_base_bdevs_discovered": 3, 00:17:54.549 "num_base_bdevs_operational": 4, 00:17:54.549 "base_bdevs_list": [ 00:17:54.549 { 00:17:54.549 "name": "BaseBdev1", 00:17:54.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.549 "is_configured": false, 00:17:54.549 "data_offset": 0, 00:17:54.549 "data_size": 0 00:17:54.549 }, 00:17:54.549 { 00:17:54.549 "name": "BaseBdev2", 00:17:54.549 "uuid": "5db7654d-e736-40eb-8c1c-eb995cc2f61c", 00:17:54.549 "is_configured": true, 00:17:54.549 "data_offset": 0, 00:17:54.549 "data_size": 65536 00:17:54.549 }, 00:17:54.549 { 00:17:54.549 "name": "BaseBdev3", 00:17:54.549 "uuid": "23c0f8f4-5d46-482d-8e1e-bf49adfd5421", 00:17:54.549 "is_configured": true, 00:17:54.549 "data_offset": 0, 00:17:54.549 "data_size": 65536 00:17:54.549 }, 00:17:54.549 { 00:17:54.549 "name": "BaseBdev4", 00:17:54.549 "uuid": "1982877a-dccf-4be0-83da-e03d94b6f957", 00:17:54.549 "is_configured": true, 00:17:54.549 "data_offset": 0, 00:17:54.549 "data_size": 65536 00:17:54.549 } 00:17:54.549 ] 00:17:54.549 }' 00:17:54.549 22:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:54.549 22:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.807 22:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:55.066 [2024-07-13 22:01:14.347751] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:55.066 22:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:55.066 22:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:55.066 22:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:55.066 22:01:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:55.066 22:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:55.066 22:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:55.066 22:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:55.066 22:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:55.066 22:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:55.066 22:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:55.066 22:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.066 22:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.326 22:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:55.326 "name": "Existed_Raid", 00:17:55.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.326 "strip_size_kb": 64, 00:17:55.326 "state": "configuring", 00:17:55.326 "raid_level": "raid0", 00:17:55.326 "superblock": false, 00:17:55.326 "num_base_bdevs": 4, 00:17:55.326 "num_base_bdevs_discovered": 2, 00:17:55.326 "num_base_bdevs_operational": 4, 00:17:55.326 "base_bdevs_list": [ 00:17:55.326 { 00:17:55.326 "name": "BaseBdev1", 00:17:55.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.326 "is_configured": false, 00:17:55.326 "data_offset": 0, 00:17:55.326 "data_size": 0 00:17:55.326 }, 00:17:55.326 { 00:17:55.326 "name": null, 00:17:55.326 "uuid": "5db7654d-e736-40eb-8c1c-eb995cc2f61c", 00:17:55.326 "is_configured": false, 00:17:55.326 "data_offset": 0, 00:17:55.326 
"data_size": 65536 00:17:55.326 }, 00:17:55.326 { 00:17:55.326 "name": "BaseBdev3", 00:17:55.326 "uuid": "23c0f8f4-5d46-482d-8e1e-bf49adfd5421", 00:17:55.326 "is_configured": true, 00:17:55.326 "data_offset": 0, 00:17:55.326 "data_size": 65536 00:17:55.326 }, 00:17:55.326 { 00:17:55.326 "name": "BaseBdev4", 00:17:55.326 "uuid": "1982877a-dccf-4be0-83da-e03d94b6f957", 00:17:55.326 "is_configured": true, 00:17:55.326 "data_offset": 0, 00:17:55.326 "data_size": 65536 00:17:55.326 } 00:17:55.326 ] 00:17:55.326 }' 00:17:55.326 22:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:55.326 22:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.893 22:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.893 22:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:55.893 22:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:55.893 22:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:56.151 [2024-07-13 22:01:15.349611] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:56.151 BaseBdev1 00:17:56.151 22:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:56.151 22:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:56.151 22:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:56.151 22:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:56.151 22:01:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:56.151 22:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:56.151 22:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:56.151 22:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:56.410 [ 00:17:56.410 { 00:17:56.410 "name": "BaseBdev1", 00:17:56.410 "aliases": [ 00:17:56.410 "40312854-d573-495f-8521-1b3c80138247" 00:17:56.410 ], 00:17:56.410 "product_name": "Malloc disk", 00:17:56.410 "block_size": 512, 00:17:56.410 "num_blocks": 65536, 00:17:56.410 "uuid": "40312854-d573-495f-8521-1b3c80138247", 00:17:56.410 "assigned_rate_limits": { 00:17:56.410 "rw_ios_per_sec": 0, 00:17:56.410 "rw_mbytes_per_sec": 0, 00:17:56.410 "r_mbytes_per_sec": 0, 00:17:56.410 "w_mbytes_per_sec": 0 00:17:56.410 }, 00:17:56.410 "claimed": true, 00:17:56.410 "claim_type": "exclusive_write", 00:17:56.410 "zoned": false, 00:17:56.410 "supported_io_types": { 00:17:56.410 "read": true, 00:17:56.410 "write": true, 00:17:56.410 "unmap": true, 00:17:56.410 "flush": true, 00:17:56.410 "reset": true, 00:17:56.410 "nvme_admin": false, 00:17:56.410 "nvme_io": false, 00:17:56.410 "nvme_io_md": false, 00:17:56.410 "write_zeroes": true, 00:17:56.410 "zcopy": true, 00:17:56.410 "get_zone_info": false, 00:17:56.410 "zone_management": false, 00:17:56.410 "zone_append": false, 00:17:56.410 "compare": false, 00:17:56.410 "compare_and_write": false, 00:17:56.410 "abort": true, 00:17:56.410 "seek_hole": false, 00:17:56.410 "seek_data": false, 00:17:56.410 "copy": true, 00:17:56.410 "nvme_iov_md": false 00:17:56.410 }, 00:17:56.410 "memory_domains": [ 00:17:56.410 { 
00:17:56.410 "dma_device_id": "system", 00:17:56.410 "dma_device_type": 1 00:17:56.410 }, 00:17:56.410 { 00:17:56.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.410 "dma_device_type": 2 00:17:56.410 } 00:17:56.410 ], 00:17:56.410 "driver_specific": {} 00:17:56.410 } 00:17:56.410 ] 00:17:56.410 22:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:56.410 22:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:56.410 22:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:56.410 22:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:56.410 22:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:56.410 22:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:56.410 22:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:56.410 22:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:56.410 22:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:56.410 22:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:56.410 22:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:56.410 22:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.410 22:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.670 22:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:17:56.670 "name": "Existed_Raid", 00:17:56.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.670 "strip_size_kb": 64, 00:17:56.670 "state": "configuring", 00:17:56.670 "raid_level": "raid0", 00:17:56.670 "superblock": false, 00:17:56.670 "num_base_bdevs": 4, 00:17:56.670 "num_base_bdevs_discovered": 3, 00:17:56.670 "num_base_bdevs_operational": 4, 00:17:56.670 "base_bdevs_list": [ 00:17:56.670 { 00:17:56.670 "name": "BaseBdev1", 00:17:56.670 "uuid": "40312854-d573-495f-8521-1b3c80138247", 00:17:56.670 "is_configured": true, 00:17:56.670 "data_offset": 0, 00:17:56.670 "data_size": 65536 00:17:56.670 }, 00:17:56.670 { 00:17:56.670 "name": null, 00:17:56.670 "uuid": "5db7654d-e736-40eb-8c1c-eb995cc2f61c", 00:17:56.670 "is_configured": false, 00:17:56.670 "data_offset": 0, 00:17:56.670 "data_size": 65536 00:17:56.670 }, 00:17:56.670 { 00:17:56.670 "name": "BaseBdev3", 00:17:56.670 "uuid": "23c0f8f4-5d46-482d-8e1e-bf49adfd5421", 00:17:56.670 "is_configured": true, 00:17:56.670 "data_offset": 0, 00:17:56.670 "data_size": 65536 00:17:56.670 }, 00:17:56.670 { 00:17:56.670 "name": "BaseBdev4", 00:17:56.670 "uuid": "1982877a-dccf-4be0-83da-e03d94b6f957", 00:17:56.670 "is_configured": true, 00:17:56.670 "data_offset": 0, 00:17:56.670 "data_size": 65536 00:17:56.670 } 00:17:56.670 ] 00:17:56.670 }' 00:17:56.670 22:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:56.670 22:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.929 22:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:56.929 22:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.188 22:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:57.188 22:01:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:57.466 [2024-07-13 22:01:16.625104] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:57.466 22:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:57.466 22:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:57.466 22:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:57.466 22:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:57.466 22:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:57.466 22:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:57.466 22:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:57.466 22:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:57.466 22:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:57.466 22:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:57.466 22:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.466 22:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.466 22:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:57.466 "name": "Existed_Raid", 00:17:57.466 "uuid": "00000000-0000-0000-0000-000000000000", 
00:17:57.466 "strip_size_kb": 64, 00:17:57.466 "state": "configuring", 00:17:57.466 "raid_level": "raid0", 00:17:57.466 "superblock": false, 00:17:57.467 "num_base_bdevs": 4, 00:17:57.467 "num_base_bdevs_discovered": 2, 00:17:57.467 "num_base_bdevs_operational": 4, 00:17:57.467 "base_bdevs_list": [ 00:17:57.467 { 00:17:57.467 "name": "BaseBdev1", 00:17:57.467 "uuid": "40312854-d573-495f-8521-1b3c80138247", 00:17:57.467 "is_configured": true, 00:17:57.467 "data_offset": 0, 00:17:57.467 "data_size": 65536 00:17:57.467 }, 00:17:57.467 { 00:17:57.467 "name": null, 00:17:57.467 "uuid": "5db7654d-e736-40eb-8c1c-eb995cc2f61c", 00:17:57.467 "is_configured": false, 00:17:57.467 "data_offset": 0, 00:17:57.467 "data_size": 65536 00:17:57.467 }, 00:17:57.467 { 00:17:57.467 "name": null, 00:17:57.467 "uuid": "23c0f8f4-5d46-482d-8e1e-bf49adfd5421", 00:17:57.467 "is_configured": false, 00:17:57.467 "data_offset": 0, 00:17:57.467 "data_size": 65536 00:17:57.467 }, 00:17:57.467 { 00:17:57.467 "name": "BaseBdev4", 00:17:57.467 "uuid": "1982877a-dccf-4be0-83da-e03d94b6f957", 00:17:57.467 "is_configured": true, 00:17:57.467 "data_offset": 0, 00:17:57.467 "data_size": 65536 00:17:57.467 } 00:17:57.467 ] 00:17:57.467 }' 00:17:57.467 22:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:57.467 22:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.037 22:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:58.037 22:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.296 22:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:58.296 22:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:58.296 [2024-07-13 22:01:17.603697] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:58.296 22:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:58.296 22:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:58.296 22:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:58.296 22:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:58.296 22:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:58.296 22:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:58.296 22:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:58.296 22:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:58.296 22:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:58.296 22:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:58.296 22:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.296 22:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.555 22:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:58.555 "name": "Existed_Raid", 00:17:58.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.555 "strip_size_kb": 64, 
00:17:58.555 "state": "configuring", 00:17:58.555 "raid_level": "raid0", 00:17:58.555 "superblock": false, 00:17:58.555 "num_base_bdevs": 4, 00:17:58.555 "num_base_bdevs_discovered": 3, 00:17:58.555 "num_base_bdevs_operational": 4, 00:17:58.555 "base_bdevs_list": [ 00:17:58.555 { 00:17:58.555 "name": "BaseBdev1", 00:17:58.555 "uuid": "40312854-d573-495f-8521-1b3c80138247", 00:17:58.555 "is_configured": true, 00:17:58.555 "data_offset": 0, 00:17:58.555 "data_size": 65536 00:17:58.555 }, 00:17:58.555 { 00:17:58.555 "name": null, 00:17:58.555 "uuid": "5db7654d-e736-40eb-8c1c-eb995cc2f61c", 00:17:58.555 "is_configured": false, 00:17:58.555 "data_offset": 0, 00:17:58.555 "data_size": 65536 00:17:58.555 }, 00:17:58.555 { 00:17:58.555 "name": "BaseBdev3", 00:17:58.555 "uuid": "23c0f8f4-5d46-482d-8e1e-bf49adfd5421", 00:17:58.555 "is_configured": true, 00:17:58.555 "data_offset": 0, 00:17:58.555 "data_size": 65536 00:17:58.555 }, 00:17:58.555 { 00:17:58.555 "name": "BaseBdev4", 00:17:58.555 "uuid": "1982877a-dccf-4be0-83da-e03d94b6f957", 00:17:58.555 "is_configured": true, 00:17:58.555 "data_offset": 0, 00:17:58.555 "data_size": 65536 00:17:58.555 } 00:17:58.555 ] 00:17:58.555 }' 00:17:58.555 22:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:58.555 22:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.121 22:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.121 22:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:59.121 22:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:59.121 22:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:59.380 [2024-07-13 22:01:18.546263] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:59.380 22:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:59.380 22:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:59.380 22:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:59.380 22:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:59.380 22:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:59.380 22:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:59.380 22:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:59.380 22:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:59.380 22:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:59.380 22:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:59.380 22:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.380 22:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.638 22:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:59.638 "name": "Existed_Raid", 00:17:59.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.638 "strip_size_kb": 64, 00:17:59.638 "state": "configuring", 00:17:59.638 "raid_level": "raid0", 00:17:59.638 "superblock": false, 
00:17:59.638 "num_base_bdevs": 4, 00:17:59.638 "num_base_bdevs_discovered": 2, 00:17:59.638 "num_base_bdevs_operational": 4, 00:17:59.638 "base_bdevs_list": [ 00:17:59.638 { 00:17:59.638 "name": null, 00:17:59.638 "uuid": "40312854-d573-495f-8521-1b3c80138247", 00:17:59.638 "is_configured": false, 00:17:59.638 "data_offset": 0, 00:17:59.638 "data_size": 65536 00:17:59.638 }, 00:17:59.638 { 00:17:59.638 "name": null, 00:17:59.638 "uuid": "5db7654d-e736-40eb-8c1c-eb995cc2f61c", 00:17:59.638 "is_configured": false, 00:17:59.638 "data_offset": 0, 00:17:59.638 "data_size": 65536 00:17:59.638 }, 00:17:59.638 { 00:17:59.638 "name": "BaseBdev3", 00:17:59.638 "uuid": "23c0f8f4-5d46-482d-8e1e-bf49adfd5421", 00:17:59.638 "is_configured": true, 00:17:59.638 "data_offset": 0, 00:17:59.638 "data_size": 65536 00:17:59.638 }, 00:17:59.638 { 00:17:59.638 "name": "BaseBdev4", 00:17:59.638 "uuid": "1982877a-dccf-4be0-83da-e03d94b6f957", 00:17:59.638 "is_configured": true, 00:17:59.638 "data_offset": 0, 00:17:59.638 "data_size": 65536 00:17:59.638 } 00:17:59.638 ] 00:17:59.638 }' 00:17:59.638 22:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:59.638 22:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.204 22:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.204 22:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:00.204 22:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:00.204 22:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:00.463 [2024-07-13 22:01:19.615679] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:00.463 22:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:00.463 22:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:00.463 22:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:00.463 22:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:00.463 22:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:00.463 22:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:00.463 22:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:00.463 22:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:00.463 22:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:00.463 22:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:00.463 22:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.463 22:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.463 22:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:00.463 "name": "Existed_Raid", 00:18:00.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.463 "strip_size_kb": 64, 00:18:00.463 "state": "configuring", 00:18:00.463 "raid_level": "raid0", 00:18:00.463 "superblock": false, 00:18:00.463 "num_base_bdevs": 4, 00:18:00.463 "num_base_bdevs_discovered": 3, 
00:18:00.463 "num_base_bdevs_operational": 4, 00:18:00.463 "base_bdevs_list": [ 00:18:00.463 { 00:18:00.463 "name": null, 00:18:00.463 "uuid": "40312854-d573-495f-8521-1b3c80138247", 00:18:00.463 "is_configured": false, 00:18:00.463 "data_offset": 0, 00:18:00.463 "data_size": 65536 00:18:00.463 }, 00:18:00.463 { 00:18:00.463 "name": "BaseBdev2", 00:18:00.463 "uuid": "5db7654d-e736-40eb-8c1c-eb995cc2f61c", 00:18:00.463 "is_configured": true, 00:18:00.463 "data_offset": 0, 00:18:00.463 "data_size": 65536 00:18:00.463 }, 00:18:00.463 { 00:18:00.463 "name": "BaseBdev3", 00:18:00.463 "uuid": "23c0f8f4-5d46-482d-8e1e-bf49adfd5421", 00:18:00.463 "is_configured": true, 00:18:00.463 "data_offset": 0, 00:18:00.463 "data_size": 65536 00:18:00.463 }, 00:18:00.463 { 00:18:00.463 "name": "BaseBdev4", 00:18:00.463 "uuid": "1982877a-dccf-4be0-83da-e03d94b6f957", 00:18:00.463 "is_configured": true, 00:18:00.463 "data_offset": 0, 00:18:00.463 "data_size": 65536 00:18:00.463 } 00:18:00.463 ] 00:18:00.463 }' 00:18:00.463 22:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:00.463 22:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.028 22:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.028 22:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:01.286 22:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:01.286 22:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.286 22:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:01.286 22:01:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 40312854-d573-495f-8521-1b3c80138247 00:18:01.544 [2024-07-13 22:01:20.814379] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:01.544 [2024-07-13 22:01:20.814423] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042080 00:18:01.544 [2024-07-13 22:01:20.814432] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:01.544 [2024-07-13 22:01:20.814686] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:18:01.544 [2024-07-13 22:01:20.814854] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042080 00:18:01.544 [2024-07-13 22:01:20.814867] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000042080 00:18:01.544 [2024-07-13 22:01:20.815110] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.544 NewBaseBdev 00:18:01.544 22:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:01.544 22:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:18:01.544 22:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:01.544 22:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:01.544 22:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:01.544 22:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:01.544 22:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:01.803 22:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:01.803 [ 00:18:01.803 { 00:18:01.803 "name": "NewBaseBdev", 00:18:01.803 "aliases": [ 00:18:01.803 "40312854-d573-495f-8521-1b3c80138247" 00:18:01.803 ], 00:18:01.803 "product_name": "Malloc disk", 00:18:01.803 "block_size": 512, 00:18:01.803 "num_blocks": 65536, 00:18:01.803 "uuid": "40312854-d573-495f-8521-1b3c80138247", 00:18:01.803 "assigned_rate_limits": { 00:18:01.803 "rw_ios_per_sec": 0, 00:18:01.803 "rw_mbytes_per_sec": 0, 00:18:01.803 "r_mbytes_per_sec": 0, 00:18:01.803 "w_mbytes_per_sec": 0 00:18:01.803 }, 00:18:01.803 "claimed": true, 00:18:01.803 "claim_type": "exclusive_write", 00:18:01.803 "zoned": false, 00:18:01.803 "supported_io_types": { 00:18:01.803 "read": true, 00:18:01.803 "write": true, 00:18:01.803 "unmap": true, 00:18:01.803 "flush": true, 00:18:01.803 "reset": true, 00:18:01.803 "nvme_admin": false, 00:18:01.803 "nvme_io": false, 00:18:01.803 "nvme_io_md": false, 00:18:01.803 "write_zeroes": true, 00:18:01.803 "zcopy": true, 00:18:01.803 "get_zone_info": false, 00:18:01.803 "zone_management": false, 00:18:01.803 "zone_append": false, 00:18:01.803 "compare": false, 00:18:01.803 "compare_and_write": false, 00:18:01.803 "abort": true, 00:18:01.803 "seek_hole": false, 00:18:01.803 "seek_data": false, 00:18:01.803 "copy": true, 00:18:01.803 "nvme_iov_md": false 00:18:01.803 }, 00:18:01.803 "memory_domains": [ 00:18:01.803 { 00:18:01.803 "dma_device_id": "system", 00:18:01.803 "dma_device_type": 1 00:18:01.803 }, 00:18:01.803 { 00:18:01.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.803 "dma_device_type": 2 00:18:01.803 } 00:18:01.803 ], 00:18:01.803 "driver_specific": {} 00:18:01.803 } 00:18:01.803 ] 00:18:01.803 22:01:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@905 -- # return 0 00:18:01.803 22:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:01.803 22:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:01.803 22:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:01.803 22:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:01.803 22:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:01.803 22:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:01.803 22:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:01.803 22:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:01.803 22:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:01.803 22:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:01.803 22:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.803 22:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.063 22:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:02.063 "name": "Existed_Raid", 00:18:02.063 "uuid": "342daa78-2f6d-44e0-8c12-4a5cbf701916", 00:18:02.063 "strip_size_kb": 64, 00:18:02.063 "state": "online", 00:18:02.063 "raid_level": "raid0", 00:18:02.063 "superblock": false, 00:18:02.063 "num_base_bdevs": 4, 00:18:02.063 "num_base_bdevs_discovered": 4, 00:18:02.063 "num_base_bdevs_operational": 4, 
00:18:02.063 "base_bdevs_list": [ 00:18:02.063 { 00:18:02.063 "name": "NewBaseBdev", 00:18:02.063 "uuid": "40312854-d573-495f-8521-1b3c80138247", 00:18:02.063 "is_configured": true, 00:18:02.063 "data_offset": 0, 00:18:02.063 "data_size": 65536 00:18:02.063 }, 00:18:02.063 { 00:18:02.063 "name": "BaseBdev2", 00:18:02.063 "uuid": "5db7654d-e736-40eb-8c1c-eb995cc2f61c", 00:18:02.063 "is_configured": true, 00:18:02.063 "data_offset": 0, 00:18:02.063 "data_size": 65536 00:18:02.063 }, 00:18:02.063 { 00:18:02.063 "name": "BaseBdev3", 00:18:02.063 "uuid": "23c0f8f4-5d46-482d-8e1e-bf49adfd5421", 00:18:02.063 "is_configured": true, 00:18:02.063 "data_offset": 0, 00:18:02.063 "data_size": 65536 00:18:02.063 }, 00:18:02.063 { 00:18:02.063 "name": "BaseBdev4", 00:18:02.063 "uuid": "1982877a-dccf-4be0-83da-e03d94b6f957", 00:18:02.063 "is_configured": true, 00:18:02.063 "data_offset": 0, 00:18:02.063 "data_size": 65536 00:18:02.063 } 00:18:02.063 ] 00:18:02.063 }' 00:18:02.063 22:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:02.063 22:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.630 22:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:02.630 22:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:02.630 22:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:02.630 22:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:02.630 22:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:02.630 22:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:02.630 22:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:02.630 22:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:02.630 [2024-07-13 22:01:21.945791] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.630 22:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:02.630 "name": "Existed_Raid", 00:18:02.630 "aliases": [ 00:18:02.630 "342daa78-2f6d-44e0-8c12-4a5cbf701916" 00:18:02.630 ], 00:18:02.630 "product_name": "Raid Volume", 00:18:02.630 "block_size": 512, 00:18:02.630 "num_blocks": 262144, 00:18:02.630 "uuid": "342daa78-2f6d-44e0-8c12-4a5cbf701916", 00:18:02.630 "assigned_rate_limits": { 00:18:02.630 "rw_ios_per_sec": 0, 00:18:02.630 "rw_mbytes_per_sec": 0, 00:18:02.630 "r_mbytes_per_sec": 0, 00:18:02.630 "w_mbytes_per_sec": 0 00:18:02.630 }, 00:18:02.630 "claimed": false, 00:18:02.630 "zoned": false, 00:18:02.630 "supported_io_types": { 00:18:02.630 "read": true, 00:18:02.630 "write": true, 00:18:02.630 "unmap": true, 00:18:02.630 "flush": true, 00:18:02.630 "reset": true, 00:18:02.630 "nvme_admin": false, 00:18:02.630 "nvme_io": false, 00:18:02.630 "nvme_io_md": false, 00:18:02.630 "write_zeroes": true, 00:18:02.630 "zcopy": false, 00:18:02.630 "get_zone_info": false, 00:18:02.630 "zone_management": false, 00:18:02.630 "zone_append": false, 00:18:02.630 "compare": false, 00:18:02.630 "compare_and_write": false, 00:18:02.630 "abort": false, 00:18:02.630 "seek_hole": false, 00:18:02.630 "seek_data": false, 00:18:02.630 "copy": false, 00:18:02.630 "nvme_iov_md": false 00:18:02.630 }, 00:18:02.630 "memory_domains": [ 00:18:02.630 { 00:18:02.630 "dma_device_id": "system", 00:18:02.630 "dma_device_type": 1 00:18:02.630 }, 00:18:02.630 { 00:18:02.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.630 "dma_device_type": 2 00:18:02.630 }, 00:18:02.630 { 00:18:02.630 "dma_device_id": "system", 00:18:02.630 "dma_device_type": 1 00:18:02.630 }, 
00:18:02.630 { 00:18:02.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.630 "dma_device_type": 2 00:18:02.630 }, 00:18:02.630 { 00:18:02.630 "dma_device_id": "system", 00:18:02.630 "dma_device_type": 1 00:18:02.630 }, 00:18:02.630 { 00:18:02.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.630 "dma_device_type": 2 00:18:02.630 }, 00:18:02.630 { 00:18:02.630 "dma_device_id": "system", 00:18:02.630 "dma_device_type": 1 00:18:02.630 }, 00:18:02.630 { 00:18:02.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.630 "dma_device_type": 2 00:18:02.630 } 00:18:02.630 ], 00:18:02.630 "driver_specific": { 00:18:02.630 "raid": { 00:18:02.630 "uuid": "342daa78-2f6d-44e0-8c12-4a5cbf701916", 00:18:02.630 "strip_size_kb": 64, 00:18:02.630 "state": "online", 00:18:02.630 "raid_level": "raid0", 00:18:02.630 "superblock": false, 00:18:02.630 "num_base_bdevs": 4, 00:18:02.630 "num_base_bdevs_discovered": 4, 00:18:02.630 "num_base_bdevs_operational": 4, 00:18:02.630 "base_bdevs_list": [ 00:18:02.630 { 00:18:02.630 "name": "NewBaseBdev", 00:18:02.630 "uuid": "40312854-d573-495f-8521-1b3c80138247", 00:18:02.630 "is_configured": true, 00:18:02.630 "data_offset": 0, 00:18:02.630 "data_size": 65536 00:18:02.630 }, 00:18:02.630 { 00:18:02.630 "name": "BaseBdev2", 00:18:02.630 "uuid": "5db7654d-e736-40eb-8c1c-eb995cc2f61c", 00:18:02.630 "is_configured": true, 00:18:02.630 "data_offset": 0, 00:18:02.630 "data_size": 65536 00:18:02.630 }, 00:18:02.630 { 00:18:02.630 "name": "BaseBdev3", 00:18:02.630 "uuid": "23c0f8f4-5d46-482d-8e1e-bf49adfd5421", 00:18:02.630 "is_configured": true, 00:18:02.630 "data_offset": 0, 00:18:02.630 "data_size": 65536 00:18:02.630 }, 00:18:02.630 { 00:18:02.630 "name": "BaseBdev4", 00:18:02.630 "uuid": "1982877a-dccf-4be0-83da-e03d94b6f957", 00:18:02.630 "is_configured": true, 00:18:02.630 "data_offset": 0, 00:18:02.630 "data_size": 65536 00:18:02.630 } 00:18:02.630 ] 00:18:02.630 } 00:18:02.630 } 00:18:02.630 }' 00:18:02.630 22:01:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:02.630 22:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:02.630 BaseBdev2 00:18:02.630 BaseBdev3 00:18:02.630 BaseBdev4' 00:18:02.630 22:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:02.631 22:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:02.631 22:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:02.915 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:02.915 "name": "NewBaseBdev", 00:18:02.915 "aliases": [ 00:18:02.915 "40312854-d573-495f-8521-1b3c80138247" 00:18:02.915 ], 00:18:02.915 "product_name": "Malloc disk", 00:18:02.915 "block_size": 512, 00:18:02.915 "num_blocks": 65536, 00:18:02.915 "uuid": "40312854-d573-495f-8521-1b3c80138247", 00:18:02.915 "assigned_rate_limits": { 00:18:02.915 "rw_ios_per_sec": 0, 00:18:02.915 "rw_mbytes_per_sec": 0, 00:18:02.915 "r_mbytes_per_sec": 0, 00:18:02.915 "w_mbytes_per_sec": 0 00:18:02.915 }, 00:18:02.915 "claimed": true, 00:18:02.915 "claim_type": "exclusive_write", 00:18:02.915 "zoned": false, 00:18:02.915 "supported_io_types": { 00:18:02.915 "read": true, 00:18:02.915 "write": true, 00:18:02.915 "unmap": true, 00:18:02.915 "flush": true, 00:18:02.915 "reset": true, 00:18:02.915 "nvme_admin": false, 00:18:02.915 "nvme_io": false, 00:18:02.915 "nvme_io_md": false, 00:18:02.915 "write_zeroes": true, 00:18:02.915 "zcopy": true, 00:18:02.915 "get_zone_info": false, 00:18:02.915 "zone_management": false, 00:18:02.915 "zone_append": false, 00:18:02.915 "compare": false, 00:18:02.915 "compare_and_write": false, 00:18:02.915 
"abort": true, 00:18:02.915 "seek_hole": false, 00:18:02.915 "seek_data": false, 00:18:02.915 "copy": true, 00:18:02.915 "nvme_iov_md": false 00:18:02.915 }, 00:18:02.915 "memory_domains": [ 00:18:02.915 { 00:18:02.915 "dma_device_id": "system", 00:18:02.915 "dma_device_type": 1 00:18:02.915 }, 00:18:02.915 { 00:18:02.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.915 "dma_device_type": 2 00:18:02.915 } 00:18:02.915 ], 00:18:02.915 "driver_specific": {} 00:18:02.915 }' 00:18:02.915 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:02.915 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:02.915 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:02.915 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:02.915 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:02.915 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:02.915 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:02.915 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:03.174 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:03.174 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:03.174 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:03.174 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:03.174 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:03.174 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:03.174 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:03.433 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:03.433 "name": "BaseBdev2", 00:18:03.433 "aliases": [ 00:18:03.433 "5db7654d-e736-40eb-8c1c-eb995cc2f61c" 00:18:03.433 ], 00:18:03.433 "product_name": "Malloc disk", 00:18:03.433 "block_size": 512, 00:18:03.433 "num_blocks": 65536, 00:18:03.433 "uuid": "5db7654d-e736-40eb-8c1c-eb995cc2f61c", 00:18:03.433 "assigned_rate_limits": { 00:18:03.433 "rw_ios_per_sec": 0, 00:18:03.433 "rw_mbytes_per_sec": 0, 00:18:03.433 "r_mbytes_per_sec": 0, 00:18:03.433 "w_mbytes_per_sec": 0 00:18:03.433 }, 00:18:03.433 "claimed": true, 00:18:03.433 "claim_type": "exclusive_write", 00:18:03.433 "zoned": false, 00:18:03.433 "supported_io_types": { 00:18:03.433 "read": true, 00:18:03.433 "write": true, 00:18:03.433 "unmap": true, 00:18:03.433 "flush": true, 00:18:03.433 "reset": true, 00:18:03.433 "nvme_admin": false, 00:18:03.433 "nvme_io": false, 00:18:03.433 "nvme_io_md": false, 00:18:03.433 "write_zeroes": true, 00:18:03.433 "zcopy": true, 00:18:03.433 "get_zone_info": false, 00:18:03.433 "zone_management": false, 00:18:03.433 "zone_append": false, 00:18:03.433 "compare": false, 00:18:03.433 "compare_and_write": false, 00:18:03.433 "abort": true, 00:18:03.433 "seek_hole": false, 00:18:03.433 "seek_data": false, 00:18:03.433 "copy": true, 00:18:03.433 "nvme_iov_md": false 00:18:03.433 }, 00:18:03.433 "memory_domains": [ 00:18:03.433 { 00:18:03.433 "dma_device_id": "system", 00:18:03.433 "dma_device_type": 1 00:18:03.433 }, 00:18:03.433 { 00:18:03.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.433 "dma_device_type": 2 00:18:03.433 } 00:18:03.433 ], 00:18:03.433 "driver_specific": {} 00:18:03.433 }' 00:18:03.433 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:03.433 22:01:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:03.433 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:03.433 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:03.433 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:03.433 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:03.433 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:03.433 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:03.691 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:03.691 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:03.691 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:03.691 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:03.691 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:03.691 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:03.691 22:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:03.950 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:03.950 "name": "BaseBdev3", 00:18:03.950 "aliases": [ 00:18:03.950 "23c0f8f4-5d46-482d-8e1e-bf49adfd5421" 00:18:03.950 ], 00:18:03.950 "product_name": "Malloc disk", 00:18:03.950 "block_size": 512, 00:18:03.950 "num_blocks": 65536, 00:18:03.950 "uuid": "23c0f8f4-5d46-482d-8e1e-bf49adfd5421", 00:18:03.950 "assigned_rate_limits": { 00:18:03.950 
"rw_ios_per_sec": 0, 00:18:03.950 "rw_mbytes_per_sec": 0, 00:18:03.950 "r_mbytes_per_sec": 0, 00:18:03.950 "w_mbytes_per_sec": 0 00:18:03.950 }, 00:18:03.950 "claimed": true, 00:18:03.950 "claim_type": "exclusive_write", 00:18:03.950 "zoned": false, 00:18:03.950 "supported_io_types": { 00:18:03.950 "read": true, 00:18:03.950 "write": true, 00:18:03.950 "unmap": true, 00:18:03.950 "flush": true, 00:18:03.950 "reset": true, 00:18:03.950 "nvme_admin": false, 00:18:03.951 "nvme_io": false, 00:18:03.951 "nvme_io_md": false, 00:18:03.951 "write_zeroes": true, 00:18:03.951 "zcopy": true, 00:18:03.951 "get_zone_info": false, 00:18:03.951 "zone_management": false, 00:18:03.951 "zone_append": false, 00:18:03.951 "compare": false, 00:18:03.951 "compare_and_write": false, 00:18:03.951 "abort": true, 00:18:03.951 "seek_hole": false, 00:18:03.951 "seek_data": false, 00:18:03.951 "copy": true, 00:18:03.951 "nvme_iov_md": false 00:18:03.951 }, 00:18:03.951 "memory_domains": [ 00:18:03.951 { 00:18:03.951 "dma_device_id": "system", 00:18:03.951 "dma_device_type": 1 00:18:03.951 }, 00:18:03.951 { 00:18:03.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.951 "dma_device_type": 2 00:18:03.951 } 00:18:03.951 ], 00:18:03.951 "driver_specific": {} 00:18:03.951 }' 00:18:03.951 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:03.951 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:03.951 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:03.951 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:03.951 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:03.951 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:03.951 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:03.951 
22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:03.951 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:03.951 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:03.951 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:04.210 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:04.210 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:04.210 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:04.210 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:04.210 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:04.210 "name": "BaseBdev4", 00:18:04.210 "aliases": [ 00:18:04.210 "1982877a-dccf-4be0-83da-e03d94b6f957" 00:18:04.210 ], 00:18:04.210 "product_name": "Malloc disk", 00:18:04.210 "block_size": 512, 00:18:04.210 "num_blocks": 65536, 00:18:04.210 "uuid": "1982877a-dccf-4be0-83da-e03d94b6f957", 00:18:04.210 "assigned_rate_limits": { 00:18:04.210 "rw_ios_per_sec": 0, 00:18:04.210 "rw_mbytes_per_sec": 0, 00:18:04.210 "r_mbytes_per_sec": 0, 00:18:04.210 "w_mbytes_per_sec": 0 00:18:04.210 }, 00:18:04.210 "claimed": true, 00:18:04.210 "claim_type": "exclusive_write", 00:18:04.210 "zoned": false, 00:18:04.210 "supported_io_types": { 00:18:04.210 "read": true, 00:18:04.210 "write": true, 00:18:04.210 "unmap": true, 00:18:04.210 "flush": true, 00:18:04.210 "reset": true, 00:18:04.210 "nvme_admin": false, 00:18:04.210 "nvme_io": false, 00:18:04.210 "nvme_io_md": false, 00:18:04.210 "write_zeroes": true, 00:18:04.210 "zcopy": true, 00:18:04.210 "get_zone_info": 
false, 00:18:04.210 "zone_management": false, 00:18:04.210 "zone_append": false, 00:18:04.210 "compare": false, 00:18:04.210 "compare_and_write": false, 00:18:04.210 "abort": true, 00:18:04.210 "seek_hole": false, 00:18:04.210 "seek_data": false, 00:18:04.210 "copy": true, 00:18:04.210 "nvme_iov_md": false 00:18:04.210 }, 00:18:04.210 "memory_domains": [ 00:18:04.210 { 00:18:04.210 "dma_device_id": "system", 00:18:04.210 "dma_device_type": 1 00:18:04.210 }, 00:18:04.210 { 00:18:04.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.210 "dma_device_type": 2 00:18:04.210 } 00:18:04.210 ], 00:18:04.210 "driver_specific": {} 00:18:04.210 }' 00:18:04.210 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:04.210 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:04.210 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:04.210 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:04.469 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:04.469 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:04.469 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:04.469 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:04.469 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:04.469 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:04.469 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:04.469 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:04.469 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:04.729 [2024-07-13 22:01:23.934738] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:04.729 [2024-07-13 22:01:23.934792] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:04.729 [2024-07-13 22:01:23.934875] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:04.729 [2024-07-13 22:01:23.934950] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:04.729 [2024-07-13 22:01:23.934964] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042080 name Existed_Raid, state offline 00:18:04.729 22:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1412145 00:18:04.729 22:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 1412145 ']' 00:18:04.729 22:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 1412145 00:18:04.729 22:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:18:04.729 22:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.729 22:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1412145 00:18:04.729 22:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:04.729 22:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:04.729 22:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1412145' 00:18:04.729 killing process with pid 1412145 00:18:04.729 22:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 
1412145 00:18:04.729 [2024-07-13 22:01:23.996468] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:04.729 22:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 1412145 00:18:04.989 [2024-07-13 22:01:24.311105] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:06.402 22:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:18:06.402 00:18:06.402 real 0m25.591s 00:18:06.402 user 0m44.909s 00:18:06.402 sys 0m4.530s 00:18:06.402 22:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:06.402 22:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.402 ************************************ 00:18:06.402 END TEST raid_state_function_test 00:18:06.402 ************************************ 00:18:06.402 22:01:25 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:06.402 22:01:25 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:18:06.402 22:01:25 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:06.402 22:01:25 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:06.402 22:01:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:06.402 ************************************ 00:18:06.402 START TEST raid_state_function_test_sb 00:18:06.402 ************************************ 00:18:06.402 22:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 true 00:18:06.402 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:18:06.402 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:18:06.402 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:18:06.402 22:01:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:06.402 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:06.402 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:06.402 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:18:06.402 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:06.402 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:06.402 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:18:06.402 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:06.402 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:06.402 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:18:06.402 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:06.403 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:06.403 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:18:06.403 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:06.403 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:06.403 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:06.403 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:06.403 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:06.403 22:01:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:06.403 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:06.403 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:06.403 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:18:06.403 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:06.403 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:18:06.403 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:18:06.403 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:18:06.403 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:06.403 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1417066 00:18:06.403 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1417066' 00:18:06.403 Process raid pid: 1417066 00:18:06.403 22:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1417066 /var/tmp/spdk-raid.sock 00:18:06.403 22:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1417066 ']' 00:18:06.403 22:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:06.403 22:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:06.403 22:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:06.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:06.403 22:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:06.403 22:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.403 [2024-07-13 22:01:25.682729] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:06.403 [2024-07-13 22:01:25.682820] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:18:06.403 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: 
Requested device 0000:3f:01.6 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:06.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:06.403 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:06.663 [2024-07-13 22:01:25.844663] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.663 [2024-07-13 22:01:26.048862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.922 [2024-07-13 22:01:26.300960] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.922 [2024-07-13 22:01:26.300989] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:07.181 22:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:07.181 22:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:18:07.181 22:01:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:07.441 [2024-07-13 22:01:26.602035] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:07.441 [2024-07-13 22:01:26.602079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:07.441 [2024-07-13 22:01:26.602088] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:07.441 [2024-07-13 22:01:26.602116] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:07.441 [2024-07-13 22:01:26.602124] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:07.441 [2024-07-13 22:01:26.602135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:07.441 [2024-07-13 22:01:26.602142] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:07.441 [2024-07-13 22:01:26.602153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:07.441 22:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:07.441 22:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:07.441 22:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:07.441 22:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:07.441 22:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:07.441 22:01:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:07.441 22:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:07.441 22:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:07.441 22:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:07.441 22:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:07.441 22:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.441 22:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.441 22:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:07.441 "name": "Existed_Raid", 00:18:07.441 "uuid": "30f3c0ce-ea8a-43cd-9371-96caab55b317", 00:18:07.441 "strip_size_kb": 64, 00:18:07.441 "state": "configuring", 00:18:07.441 "raid_level": "raid0", 00:18:07.441 "superblock": true, 00:18:07.441 "num_base_bdevs": 4, 00:18:07.441 "num_base_bdevs_discovered": 0, 00:18:07.441 "num_base_bdevs_operational": 4, 00:18:07.441 "base_bdevs_list": [ 00:18:07.441 { 00:18:07.441 "name": "BaseBdev1", 00:18:07.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.441 "is_configured": false, 00:18:07.441 "data_offset": 0, 00:18:07.441 "data_size": 0 00:18:07.441 }, 00:18:07.441 { 00:18:07.441 "name": "BaseBdev2", 00:18:07.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.441 "is_configured": false, 00:18:07.441 "data_offset": 0, 00:18:07.441 "data_size": 0 00:18:07.441 }, 00:18:07.441 { 00:18:07.441 "name": "BaseBdev3", 00:18:07.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.441 "is_configured": false, 00:18:07.441 "data_offset": 0, 00:18:07.441 "data_size": 
0 00:18:07.441 }, 00:18:07.441 { 00:18:07.441 "name": "BaseBdev4", 00:18:07.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.441 "is_configured": false, 00:18:07.441 "data_offset": 0, 00:18:07.441 "data_size": 0 00:18:07.441 } 00:18:07.441 ] 00:18:07.441 }' 00:18:07.441 22:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:07.441 22:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.007 22:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:08.265 [2024-07-13 22:01:27.440088] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:08.265 [2024-07-13 22:01:27.440122] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:18:08.265 22:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:08.265 [2024-07-13 22:01:27.616607] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:08.265 [2024-07-13 22:01:27.616644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:08.265 [2024-07-13 22:01:27.616653] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:08.265 [2024-07-13 22:01:27.616670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:08.265 [2024-07-13 22:01:27.616678] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:08.265 [2024-07-13 22:01:27.616689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev3 doesn't exist now 00:18:08.265 [2024-07-13 22:01:27.616696] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:08.265 [2024-07-13 22:01:27.616707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:08.265 22:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:08.523 [2024-07-13 22:01:27.829071] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:08.523 BaseBdev1 00:18:08.523 22:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:08.523 22:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:08.523 22:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:08.523 22:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:08.523 22:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:08.523 22:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:08.523 22:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:08.781 22:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:09.040 [ 00:18:09.040 { 00:18:09.040 "name": "BaseBdev1", 00:18:09.040 "aliases": [ 00:18:09.040 "f0463d3e-1e5d-4ff4-b9d2-e02d6432380d" 00:18:09.040 ], 00:18:09.040 "product_name": "Malloc disk", 00:18:09.040 
"block_size": 512, 00:18:09.040 "num_blocks": 65536, 00:18:09.040 "uuid": "f0463d3e-1e5d-4ff4-b9d2-e02d6432380d", 00:18:09.040 "assigned_rate_limits": { 00:18:09.040 "rw_ios_per_sec": 0, 00:18:09.040 "rw_mbytes_per_sec": 0, 00:18:09.040 "r_mbytes_per_sec": 0, 00:18:09.040 "w_mbytes_per_sec": 0 00:18:09.040 }, 00:18:09.040 "claimed": true, 00:18:09.040 "claim_type": "exclusive_write", 00:18:09.040 "zoned": false, 00:18:09.040 "supported_io_types": { 00:18:09.040 "read": true, 00:18:09.040 "write": true, 00:18:09.040 "unmap": true, 00:18:09.040 "flush": true, 00:18:09.040 "reset": true, 00:18:09.040 "nvme_admin": false, 00:18:09.040 "nvme_io": false, 00:18:09.040 "nvme_io_md": false, 00:18:09.040 "write_zeroes": true, 00:18:09.040 "zcopy": true, 00:18:09.040 "get_zone_info": false, 00:18:09.040 "zone_management": false, 00:18:09.040 "zone_append": false, 00:18:09.040 "compare": false, 00:18:09.040 "compare_and_write": false, 00:18:09.040 "abort": true, 00:18:09.040 "seek_hole": false, 00:18:09.040 "seek_data": false, 00:18:09.040 "copy": true, 00:18:09.040 "nvme_iov_md": false 00:18:09.040 }, 00:18:09.040 "memory_domains": [ 00:18:09.040 { 00:18:09.041 "dma_device_id": "system", 00:18:09.041 "dma_device_type": 1 00:18:09.041 }, 00:18:09.041 { 00:18:09.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.041 "dma_device_type": 2 00:18:09.041 } 00:18:09.041 ], 00:18:09.041 "driver_specific": {} 00:18:09.041 } 00:18:09.041 ] 00:18:09.041 22:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:09.041 22:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:09.041 22:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:09.041 22:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:09.041 22:01:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:09.041 22:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:09.041 22:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:09.041 22:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:09.041 22:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:09.041 22:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:09.041 22:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:09.041 22:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.041 22:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.041 22:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:09.041 "name": "Existed_Raid", 00:18:09.041 "uuid": "b7a393ff-0705-48ec-9cbb-b8f83e014ed0", 00:18:09.041 "strip_size_kb": 64, 00:18:09.041 "state": "configuring", 00:18:09.041 "raid_level": "raid0", 00:18:09.041 "superblock": true, 00:18:09.041 "num_base_bdevs": 4, 00:18:09.041 "num_base_bdevs_discovered": 1, 00:18:09.041 "num_base_bdevs_operational": 4, 00:18:09.041 "base_bdevs_list": [ 00:18:09.041 { 00:18:09.041 "name": "BaseBdev1", 00:18:09.041 "uuid": "f0463d3e-1e5d-4ff4-b9d2-e02d6432380d", 00:18:09.041 "is_configured": true, 00:18:09.041 "data_offset": 2048, 00:18:09.041 "data_size": 63488 00:18:09.041 }, 00:18:09.041 { 00:18:09.041 "name": "BaseBdev2", 00:18:09.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.041 "is_configured": false, 
00:18:09.041 "data_offset": 0, 00:18:09.041 "data_size": 0 00:18:09.041 }, 00:18:09.041 { 00:18:09.041 "name": "BaseBdev3", 00:18:09.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.041 "is_configured": false, 00:18:09.041 "data_offset": 0, 00:18:09.041 "data_size": 0 00:18:09.041 }, 00:18:09.041 { 00:18:09.041 "name": "BaseBdev4", 00:18:09.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.041 "is_configured": false, 00:18:09.041 "data_offset": 0, 00:18:09.041 "data_size": 0 00:18:09.041 } 00:18:09.041 ] 00:18:09.041 }' 00:18:09.041 22:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:09.041 22:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.607 22:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:09.866 [2024-07-13 22:01:29.008218] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:09.866 [2024-07-13 22:01:29.008263] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:18:09.866 22:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:09.866 [2024-07-13 22:01:29.176754] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:09.866 [2024-07-13 22:01:29.178453] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:09.866 [2024-07-13 22:01:29.178491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:09.866 [2024-07-13 22:01:29.178501] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:18:09.866 [2024-07-13 22:01:29.178512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:09.867 [2024-07-13 22:01:29.178520] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:09.867 [2024-07-13 22:01:29.178536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:09.867 22:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:09.867 22:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:09.867 22:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:09.867 22:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:09.867 22:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:09.867 22:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:09.867 22:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:09.867 22:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:09.867 22:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:09.867 22:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:09.867 22:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:09.867 22:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:09.867 22:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.867 22:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.124 22:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:10.124 "name": "Existed_Raid", 00:18:10.124 "uuid": "6b136776-77f0-421e-93f5-aa18903d4d2b", 00:18:10.124 "strip_size_kb": 64, 00:18:10.124 "state": "configuring", 00:18:10.124 "raid_level": "raid0", 00:18:10.124 "superblock": true, 00:18:10.124 "num_base_bdevs": 4, 00:18:10.124 "num_base_bdevs_discovered": 1, 00:18:10.124 "num_base_bdevs_operational": 4, 00:18:10.124 "base_bdevs_list": [ 00:18:10.124 { 00:18:10.124 "name": "BaseBdev1", 00:18:10.124 "uuid": "f0463d3e-1e5d-4ff4-b9d2-e02d6432380d", 00:18:10.124 "is_configured": true, 00:18:10.124 "data_offset": 2048, 00:18:10.124 "data_size": 63488 00:18:10.124 }, 00:18:10.124 { 00:18:10.124 "name": "BaseBdev2", 00:18:10.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.124 "is_configured": false, 00:18:10.124 "data_offset": 0, 00:18:10.124 "data_size": 0 00:18:10.124 }, 00:18:10.124 { 00:18:10.124 "name": "BaseBdev3", 00:18:10.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.124 "is_configured": false, 00:18:10.124 "data_offset": 0, 00:18:10.124 "data_size": 0 00:18:10.124 }, 00:18:10.124 { 00:18:10.124 "name": "BaseBdev4", 00:18:10.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.124 "is_configured": false, 00:18:10.124 "data_offset": 0, 00:18:10.124 "data_size": 0 00:18:10.124 } 00:18:10.124 ] 00:18:10.124 }' 00:18:10.124 22:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:10.124 22:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.692 22:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:10.692 [2024-07-13 22:01:30.046157] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:10.692 BaseBdev2 00:18:10.692 22:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:10.692 22:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:10.692 22:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:10.692 22:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:10.692 22:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:10.692 22:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:10.692 22:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:10.955 22:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:11.215 [ 00:18:11.215 { 00:18:11.215 "name": "BaseBdev2", 00:18:11.215 "aliases": [ 00:18:11.215 "252cd7c4-3d0e-49bf-b8d5-b113ebfa40d8" 00:18:11.215 ], 00:18:11.215 "product_name": "Malloc disk", 00:18:11.215 "block_size": 512, 00:18:11.215 "num_blocks": 65536, 00:18:11.215 "uuid": "252cd7c4-3d0e-49bf-b8d5-b113ebfa40d8", 00:18:11.215 "assigned_rate_limits": { 00:18:11.215 "rw_ios_per_sec": 0, 00:18:11.215 "rw_mbytes_per_sec": 0, 00:18:11.215 "r_mbytes_per_sec": 0, 00:18:11.215 "w_mbytes_per_sec": 0 00:18:11.215 }, 00:18:11.215 "claimed": true, 00:18:11.215 "claim_type": "exclusive_write", 00:18:11.215 "zoned": false, 00:18:11.215 "supported_io_types": { 
00:18:11.215 "read": true, 00:18:11.215 "write": true, 00:18:11.215 "unmap": true, 00:18:11.215 "flush": true, 00:18:11.215 "reset": true, 00:18:11.215 "nvme_admin": false, 00:18:11.215 "nvme_io": false, 00:18:11.215 "nvme_io_md": false, 00:18:11.215 "write_zeroes": true, 00:18:11.215 "zcopy": true, 00:18:11.215 "get_zone_info": false, 00:18:11.215 "zone_management": false, 00:18:11.215 "zone_append": false, 00:18:11.215 "compare": false, 00:18:11.215 "compare_and_write": false, 00:18:11.216 "abort": true, 00:18:11.216 "seek_hole": false, 00:18:11.216 "seek_data": false, 00:18:11.216 "copy": true, 00:18:11.216 "nvme_iov_md": false 00:18:11.216 }, 00:18:11.216 "memory_domains": [ 00:18:11.216 { 00:18:11.216 "dma_device_id": "system", 00:18:11.216 "dma_device_type": 1 00:18:11.216 }, 00:18:11.216 { 00:18:11.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.216 "dma_device_type": 2 00:18:11.216 } 00:18:11.216 ], 00:18:11.216 "driver_specific": {} 00:18:11.216 } 00:18:11.216 ] 00:18:11.216 22:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:11.216 22:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:11.216 22:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:11.216 22:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:11.216 22:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:11.216 22:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:11.216 22:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:11.216 22:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:11.216 22:01:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:11.216 22:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:11.216 22:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:11.216 22:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:11.216 22:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:11.216 22:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.216 22:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.216 22:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:11.216 "name": "Existed_Raid", 00:18:11.216 "uuid": "6b136776-77f0-421e-93f5-aa18903d4d2b", 00:18:11.216 "strip_size_kb": 64, 00:18:11.216 "state": "configuring", 00:18:11.216 "raid_level": "raid0", 00:18:11.216 "superblock": true, 00:18:11.216 "num_base_bdevs": 4, 00:18:11.216 "num_base_bdevs_discovered": 2, 00:18:11.216 "num_base_bdevs_operational": 4, 00:18:11.216 "base_bdevs_list": [ 00:18:11.216 { 00:18:11.216 "name": "BaseBdev1", 00:18:11.216 "uuid": "f0463d3e-1e5d-4ff4-b9d2-e02d6432380d", 00:18:11.216 "is_configured": true, 00:18:11.216 "data_offset": 2048, 00:18:11.216 "data_size": 63488 00:18:11.216 }, 00:18:11.216 { 00:18:11.216 "name": "BaseBdev2", 00:18:11.216 "uuid": "252cd7c4-3d0e-49bf-b8d5-b113ebfa40d8", 00:18:11.216 "is_configured": true, 00:18:11.216 "data_offset": 2048, 00:18:11.216 "data_size": 63488 00:18:11.216 }, 00:18:11.216 { 00:18:11.216 "name": "BaseBdev3", 00:18:11.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.216 "is_configured": false, 00:18:11.216 "data_offset": 0, 00:18:11.216 
"data_size": 0 00:18:11.216 }, 00:18:11.216 { 00:18:11.216 "name": "BaseBdev4", 00:18:11.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.216 "is_configured": false, 00:18:11.216 "data_offset": 0, 00:18:11.216 "data_size": 0 00:18:11.216 } 00:18:11.216 ] 00:18:11.216 }' 00:18:11.216 22:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:11.216 22:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.781 22:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:12.040 [2024-07-13 22:01:31.223538] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:12.040 BaseBdev3 00:18:12.040 22:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:18:12.040 22:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:12.040 22:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:12.040 22:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:12.040 22:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:12.040 22:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:12.040 22:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:12.040 22:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:12.298 [ 
00:18:12.298 { 00:18:12.298 "name": "BaseBdev3", 00:18:12.298 "aliases": [ 00:18:12.298 "26252f3b-3b0d-40fd-a403-08c5bdb2d341" 00:18:12.298 ], 00:18:12.298 "product_name": "Malloc disk", 00:18:12.298 "block_size": 512, 00:18:12.298 "num_blocks": 65536, 00:18:12.298 "uuid": "26252f3b-3b0d-40fd-a403-08c5bdb2d341", 00:18:12.298 "assigned_rate_limits": { 00:18:12.298 "rw_ios_per_sec": 0, 00:18:12.298 "rw_mbytes_per_sec": 0, 00:18:12.298 "r_mbytes_per_sec": 0, 00:18:12.298 "w_mbytes_per_sec": 0 00:18:12.298 }, 00:18:12.298 "claimed": true, 00:18:12.298 "claim_type": "exclusive_write", 00:18:12.299 "zoned": false, 00:18:12.299 "supported_io_types": { 00:18:12.299 "read": true, 00:18:12.299 "write": true, 00:18:12.299 "unmap": true, 00:18:12.299 "flush": true, 00:18:12.299 "reset": true, 00:18:12.299 "nvme_admin": false, 00:18:12.299 "nvme_io": false, 00:18:12.299 "nvme_io_md": false, 00:18:12.299 "write_zeroes": true, 00:18:12.299 "zcopy": true, 00:18:12.299 "get_zone_info": false, 00:18:12.299 "zone_management": false, 00:18:12.299 "zone_append": false, 00:18:12.299 "compare": false, 00:18:12.299 "compare_and_write": false, 00:18:12.299 "abort": true, 00:18:12.299 "seek_hole": false, 00:18:12.299 "seek_data": false, 00:18:12.299 "copy": true, 00:18:12.299 "nvme_iov_md": false 00:18:12.299 }, 00:18:12.299 "memory_domains": [ 00:18:12.299 { 00:18:12.299 "dma_device_id": "system", 00:18:12.299 "dma_device_type": 1 00:18:12.299 }, 00:18:12.299 { 00:18:12.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.299 "dma_device_type": 2 00:18:12.299 } 00:18:12.299 ], 00:18:12.299 "driver_specific": {} 00:18:12.299 } 00:18:12.299 ] 00:18:12.299 22:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:12.299 22:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:12.299 22:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:12.299 22:01:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:12.299 22:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:12.299 22:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:12.299 22:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:12.299 22:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:12.299 22:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:12.299 22:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:12.299 22:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:12.299 22:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:12.299 22:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:12.299 22:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.299 22:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.558 22:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:12.558 "name": "Existed_Raid", 00:18:12.558 "uuid": "6b136776-77f0-421e-93f5-aa18903d4d2b", 00:18:12.558 "strip_size_kb": 64, 00:18:12.558 "state": "configuring", 00:18:12.558 "raid_level": "raid0", 00:18:12.558 "superblock": true, 00:18:12.558 "num_base_bdevs": 4, 00:18:12.558 "num_base_bdevs_discovered": 3, 00:18:12.558 "num_base_bdevs_operational": 4, 00:18:12.558 
"base_bdevs_list": [ 00:18:12.558 { 00:18:12.558 "name": "BaseBdev1", 00:18:12.558 "uuid": "f0463d3e-1e5d-4ff4-b9d2-e02d6432380d", 00:18:12.558 "is_configured": true, 00:18:12.558 "data_offset": 2048, 00:18:12.558 "data_size": 63488 00:18:12.558 }, 00:18:12.558 { 00:18:12.558 "name": "BaseBdev2", 00:18:12.558 "uuid": "252cd7c4-3d0e-49bf-b8d5-b113ebfa40d8", 00:18:12.558 "is_configured": true, 00:18:12.558 "data_offset": 2048, 00:18:12.558 "data_size": 63488 00:18:12.558 }, 00:18:12.558 { 00:18:12.558 "name": "BaseBdev3", 00:18:12.558 "uuid": "26252f3b-3b0d-40fd-a403-08c5bdb2d341", 00:18:12.558 "is_configured": true, 00:18:12.558 "data_offset": 2048, 00:18:12.558 "data_size": 63488 00:18:12.558 }, 00:18:12.558 { 00:18:12.558 "name": "BaseBdev4", 00:18:12.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.558 "is_configured": false, 00:18:12.558 "data_offset": 0, 00:18:12.558 "data_size": 0 00:18:12.558 } 00:18:12.558 ] 00:18:12.558 }' 00:18:12.558 22:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:12.558 22:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.125 22:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:13.125 [2024-07-13 22:01:32.409602] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:13.125 [2024-07-13 22:01:32.409817] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:18:13.125 [2024-07-13 22:01:32.409833] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:13.125 [2024-07-13 22:01:32.410096] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:18:13.125 [2024-07-13 22:01:32.410269] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x61600003ff80 00:18:13.125 [2024-07-13 22:01:32.410282] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:18:13.125 [2024-07-13 22:01:32.410409] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.125 BaseBdev4 00:18:13.125 22:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:18:13.125 22:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:18:13.125 22:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:13.125 22:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:13.125 22:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:13.125 22:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:13.125 22:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:13.383 22:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:13.383 [ 00:18:13.383 { 00:18:13.383 "name": "BaseBdev4", 00:18:13.383 "aliases": [ 00:18:13.383 "a161fcb6-03da-4fe2-8dcb-8ec641d2ff4d" 00:18:13.383 ], 00:18:13.383 "product_name": "Malloc disk", 00:18:13.383 "block_size": 512, 00:18:13.383 "num_blocks": 65536, 00:18:13.383 "uuid": "a161fcb6-03da-4fe2-8dcb-8ec641d2ff4d", 00:18:13.383 "assigned_rate_limits": { 00:18:13.383 "rw_ios_per_sec": 0, 00:18:13.383 "rw_mbytes_per_sec": 0, 00:18:13.383 "r_mbytes_per_sec": 0, 00:18:13.383 "w_mbytes_per_sec": 0 00:18:13.383 }, 00:18:13.383 "claimed": true, 
00:18:13.383 "claim_type": "exclusive_write", 00:18:13.383 "zoned": false, 00:18:13.383 "supported_io_types": { 00:18:13.383 "read": true, 00:18:13.383 "write": true, 00:18:13.383 "unmap": true, 00:18:13.383 "flush": true, 00:18:13.383 "reset": true, 00:18:13.383 "nvme_admin": false, 00:18:13.383 "nvme_io": false, 00:18:13.383 "nvme_io_md": false, 00:18:13.383 "write_zeroes": true, 00:18:13.383 "zcopy": true, 00:18:13.383 "get_zone_info": false, 00:18:13.383 "zone_management": false, 00:18:13.383 "zone_append": false, 00:18:13.383 "compare": false, 00:18:13.383 "compare_and_write": false, 00:18:13.383 "abort": true, 00:18:13.383 "seek_hole": false, 00:18:13.383 "seek_data": false, 00:18:13.383 "copy": true, 00:18:13.383 "nvme_iov_md": false 00:18:13.383 }, 00:18:13.383 "memory_domains": [ 00:18:13.383 { 00:18:13.383 "dma_device_id": "system", 00:18:13.383 "dma_device_type": 1 00:18:13.383 }, 00:18:13.383 { 00:18:13.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.383 "dma_device_type": 2 00:18:13.383 } 00:18:13.383 ], 00:18:13.383 "driver_specific": {} 00:18:13.383 } 00:18:13.383 ] 00:18:13.383 22:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:13.383 22:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:13.383 22:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:13.383 22:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:13.383 22:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:13.383 22:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:13.383 22:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:13.383 22:01:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:13.383 22:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:13.383 22:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:13.383 22:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:13.383 22:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:13.383 22:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:13.383 22:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.383 22:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.641 22:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:13.641 "name": "Existed_Raid", 00:18:13.641 "uuid": "6b136776-77f0-421e-93f5-aa18903d4d2b", 00:18:13.641 "strip_size_kb": 64, 00:18:13.641 "state": "online", 00:18:13.641 "raid_level": "raid0", 00:18:13.641 "superblock": true, 00:18:13.641 "num_base_bdevs": 4, 00:18:13.641 "num_base_bdevs_discovered": 4, 00:18:13.641 "num_base_bdevs_operational": 4, 00:18:13.641 "base_bdevs_list": [ 00:18:13.641 { 00:18:13.641 "name": "BaseBdev1", 00:18:13.641 "uuid": "f0463d3e-1e5d-4ff4-b9d2-e02d6432380d", 00:18:13.641 "is_configured": true, 00:18:13.641 "data_offset": 2048, 00:18:13.641 "data_size": 63488 00:18:13.641 }, 00:18:13.641 { 00:18:13.641 "name": "BaseBdev2", 00:18:13.641 "uuid": "252cd7c4-3d0e-49bf-b8d5-b113ebfa40d8", 00:18:13.641 "is_configured": true, 00:18:13.641 "data_offset": 2048, 00:18:13.641 "data_size": 63488 00:18:13.641 }, 00:18:13.641 { 00:18:13.641 "name": "BaseBdev3", 00:18:13.641 "uuid": 
"26252f3b-3b0d-40fd-a403-08c5bdb2d341", 00:18:13.641 "is_configured": true, 00:18:13.641 "data_offset": 2048, 00:18:13.641 "data_size": 63488 00:18:13.641 }, 00:18:13.641 { 00:18:13.641 "name": "BaseBdev4", 00:18:13.641 "uuid": "a161fcb6-03da-4fe2-8dcb-8ec641d2ff4d", 00:18:13.641 "is_configured": true, 00:18:13.641 "data_offset": 2048, 00:18:13.641 "data_size": 63488 00:18:13.641 } 00:18:13.641 ] 00:18:13.641 }' 00:18:13.641 22:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:13.641 22:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.207 22:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:14.207 22:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:14.207 22:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:14.207 22:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:14.207 22:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:14.207 22:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:14.207 22:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:14.207 22:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:14.207 [2024-07-13 22:01:33.573018] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:14.207 22:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:14.207 "name": "Existed_Raid", 00:18:14.207 "aliases": [ 00:18:14.207 "6b136776-77f0-421e-93f5-aa18903d4d2b" 00:18:14.207 ], 00:18:14.207 
"product_name": "Raid Volume", 00:18:14.207 "block_size": 512, 00:18:14.207 "num_blocks": 253952, 00:18:14.207 "uuid": "6b136776-77f0-421e-93f5-aa18903d4d2b", 00:18:14.207 "assigned_rate_limits": { 00:18:14.207 "rw_ios_per_sec": 0, 00:18:14.207 "rw_mbytes_per_sec": 0, 00:18:14.207 "r_mbytes_per_sec": 0, 00:18:14.207 "w_mbytes_per_sec": 0 00:18:14.207 }, 00:18:14.207 "claimed": false, 00:18:14.207 "zoned": false, 00:18:14.207 "supported_io_types": { 00:18:14.207 "read": true, 00:18:14.207 "write": true, 00:18:14.207 "unmap": true, 00:18:14.207 "flush": true, 00:18:14.207 "reset": true, 00:18:14.207 "nvme_admin": false, 00:18:14.207 "nvme_io": false, 00:18:14.207 "nvme_io_md": false, 00:18:14.207 "write_zeroes": true, 00:18:14.207 "zcopy": false, 00:18:14.207 "get_zone_info": false, 00:18:14.207 "zone_management": false, 00:18:14.207 "zone_append": false, 00:18:14.207 "compare": false, 00:18:14.207 "compare_and_write": false, 00:18:14.207 "abort": false, 00:18:14.207 "seek_hole": false, 00:18:14.207 "seek_data": false, 00:18:14.207 "copy": false, 00:18:14.207 "nvme_iov_md": false 00:18:14.207 }, 00:18:14.207 "memory_domains": [ 00:18:14.207 { 00:18:14.207 "dma_device_id": "system", 00:18:14.207 "dma_device_type": 1 00:18:14.207 }, 00:18:14.207 { 00:18:14.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.207 "dma_device_type": 2 00:18:14.207 }, 00:18:14.207 { 00:18:14.207 "dma_device_id": "system", 00:18:14.207 "dma_device_type": 1 00:18:14.207 }, 00:18:14.207 { 00:18:14.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.207 "dma_device_type": 2 00:18:14.207 }, 00:18:14.207 { 00:18:14.207 "dma_device_id": "system", 00:18:14.207 "dma_device_type": 1 00:18:14.207 }, 00:18:14.207 { 00:18:14.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.207 "dma_device_type": 2 00:18:14.207 }, 00:18:14.207 { 00:18:14.207 "dma_device_id": "system", 00:18:14.207 "dma_device_type": 1 00:18:14.207 }, 00:18:14.207 { 00:18:14.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:18:14.207 "dma_device_type": 2 00:18:14.207 } 00:18:14.207 ], 00:18:14.207 "driver_specific": { 00:18:14.207 "raid": { 00:18:14.207 "uuid": "6b136776-77f0-421e-93f5-aa18903d4d2b", 00:18:14.207 "strip_size_kb": 64, 00:18:14.207 "state": "online", 00:18:14.207 "raid_level": "raid0", 00:18:14.207 "superblock": true, 00:18:14.207 "num_base_bdevs": 4, 00:18:14.207 "num_base_bdevs_discovered": 4, 00:18:14.207 "num_base_bdevs_operational": 4, 00:18:14.207 "base_bdevs_list": [ 00:18:14.207 { 00:18:14.207 "name": "BaseBdev1", 00:18:14.207 "uuid": "f0463d3e-1e5d-4ff4-b9d2-e02d6432380d", 00:18:14.207 "is_configured": true, 00:18:14.207 "data_offset": 2048, 00:18:14.207 "data_size": 63488 00:18:14.207 }, 00:18:14.207 { 00:18:14.207 "name": "BaseBdev2", 00:18:14.207 "uuid": "252cd7c4-3d0e-49bf-b8d5-b113ebfa40d8", 00:18:14.207 "is_configured": true, 00:18:14.207 "data_offset": 2048, 00:18:14.207 "data_size": 63488 00:18:14.207 }, 00:18:14.207 { 00:18:14.207 "name": "BaseBdev3", 00:18:14.207 "uuid": "26252f3b-3b0d-40fd-a403-08c5bdb2d341", 00:18:14.207 "is_configured": true, 00:18:14.207 "data_offset": 2048, 00:18:14.207 "data_size": 63488 00:18:14.207 }, 00:18:14.207 { 00:18:14.207 "name": "BaseBdev4", 00:18:14.207 "uuid": "a161fcb6-03da-4fe2-8dcb-8ec641d2ff4d", 00:18:14.207 "is_configured": true, 00:18:14.207 "data_offset": 2048, 00:18:14.207 "data_size": 63488 00:18:14.207 } 00:18:14.207 ] 00:18:14.207 } 00:18:14.207 } 00:18:14.207 }' 00:18:14.208 22:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:14.466 22:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:14.466 BaseBdev2 00:18:14.466 BaseBdev3 00:18:14.466 BaseBdev4' 00:18:14.466 22:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:14.466 22:01:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:14.466 22:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:14.466 22:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:14.466 "name": "BaseBdev1", 00:18:14.466 "aliases": [ 00:18:14.466 "f0463d3e-1e5d-4ff4-b9d2-e02d6432380d" 00:18:14.466 ], 00:18:14.466 "product_name": "Malloc disk", 00:18:14.466 "block_size": 512, 00:18:14.466 "num_blocks": 65536, 00:18:14.466 "uuid": "f0463d3e-1e5d-4ff4-b9d2-e02d6432380d", 00:18:14.466 "assigned_rate_limits": { 00:18:14.466 "rw_ios_per_sec": 0, 00:18:14.466 "rw_mbytes_per_sec": 0, 00:18:14.466 "r_mbytes_per_sec": 0, 00:18:14.466 "w_mbytes_per_sec": 0 00:18:14.466 }, 00:18:14.466 "claimed": true, 00:18:14.466 "claim_type": "exclusive_write", 00:18:14.466 "zoned": false, 00:18:14.466 "supported_io_types": { 00:18:14.466 "read": true, 00:18:14.466 "write": true, 00:18:14.466 "unmap": true, 00:18:14.466 "flush": true, 00:18:14.466 "reset": true, 00:18:14.466 "nvme_admin": false, 00:18:14.466 "nvme_io": false, 00:18:14.466 "nvme_io_md": false, 00:18:14.466 "write_zeroes": true, 00:18:14.466 "zcopy": true, 00:18:14.466 "get_zone_info": false, 00:18:14.466 "zone_management": false, 00:18:14.466 "zone_append": false, 00:18:14.466 "compare": false, 00:18:14.466 "compare_and_write": false, 00:18:14.466 "abort": true, 00:18:14.466 "seek_hole": false, 00:18:14.466 "seek_data": false, 00:18:14.466 "copy": true, 00:18:14.466 "nvme_iov_md": false 00:18:14.466 }, 00:18:14.466 "memory_domains": [ 00:18:14.466 { 00:18:14.466 "dma_device_id": "system", 00:18:14.466 "dma_device_type": 1 00:18:14.466 }, 00:18:14.466 { 00:18:14.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.466 "dma_device_type": 2 00:18:14.466 } 00:18:14.466 ], 00:18:14.466 "driver_specific": {} 00:18:14.466 }' 00:18:14.466 22:01:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:14.466 22:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:14.725 22:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:14.725 22:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:14.725 22:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:14.725 22:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:14.725 22:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:14.725 22:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:14.725 22:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:14.725 22:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:14.725 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:14.725 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:14.725 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:14.725 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:14.725 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:14.983 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:14.983 "name": "BaseBdev2", 00:18:14.983 "aliases": [ 00:18:14.983 "252cd7c4-3d0e-49bf-b8d5-b113ebfa40d8" 00:18:14.983 ], 00:18:14.983 "product_name": "Malloc disk", 00:18:14.983 "block_size": 512, 00:18:14.983 
"num_blocks": 65536, 00:18:14.983 "uuid": "252cd7c4-3d0e-49bf-b8d5-b113ebfa40d8", 00:18:14.983 "assigned_rate_limits": { 00:18:14.983 "rw_ios_per_sec": 0, 00:18:14.983 "rw_mbytes_per_sec": 0, 00:18:14.983 "r_mbytes_per_sec": 0, 00:18:14.983 "w_mbytes_per_sec": 0 00:18:14.983 }, 00:18:14.983 "claimed": true, 00:18:14.983 "claim_type": "exclusive_write", 00:18:14.983 "zoned": false, 00:18:14.983 "supported_io_types": { 00:18:14.983 "read": true, 00:18:14.983 "write": true, 00:18:14.983 "unmap": true, 00:18:14.983 "flush": true, 00:18:14.983 "reset": true, 00:18:14.983 "nvme_admin": false, 00:18:14.983 "nvme_io": false, 00:18:14.983 "nvme_io_md": false, 00:18:14.983 "write_zeroes": true, 00:18:14.983 "zcopy": true, 00:18:14.983 "get_zone_info": false, 00:18:14.983 "zone_management": false, 00:18:14.983 "zone_append": false, 00:18:14.983 "compare": false, 00:18:14.983 "compare_and_write": false, 00:18:14.983 "abort": true, 00:18:14.983 "seek_hole": false, 00:18:14.983 "seek_data": false, 00:18:14.983 "copy": true, 00:18:14.983 "nvme_iov_md": false 00:18:14.983 }, 00:18:14.983 "memory_domains": [ 00:18:14.983 { 00:18:14.983 "dma_device_id": "system", 00:18:14.983 "dma_device_type": 1 00:18:14.983 }, 00:18:14.983 { 00:18:14.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.983 "dma_device_type": 2 00:18:14.983 } 00:18:14.983 ], 00:18:14.983 "driver_specific": {} 00:18:14.983 }' 00:18:14.983 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:14.983 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:14.983 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:14.983 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:14.983 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:15.242 22:01:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:15.242 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:15.242 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:15.242 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:15.242 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:15.242 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:15.242 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:15.242 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:15.242 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:15.242 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:15.500 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:15.500 "name": "BaseBdev3", 00:18:15.500 "aliases": [ 00:18:15.500 "26252f3b-3b0d-40fd-a403-08c5bdb2d341" 00:18:15.500 ], 00:18:15.500 "product_name": "Malloc disk", 00:18:15.500 "block_size": 512, 00:18:15.500 "num_blocks": 65536, 00:18:15.500 "uuid": "26252f3b-3b0d-40fd-a403-08c5bdb2d341", 00:18:15.500 "assigned_rate_limits": { 00:18:15.500 "rw_ios_per_sec": 0, 00:18:15.500 "rw_mbytes_per_sec": 0, 00:18:15.500 "r_mbytes_per_sec": 0, 00:18:15.500 "w_mbytes_per_sec": 0 00:18:15.500 }, 00:18:15.500 "claimed": true, 00:18:15.500 "claim_type": "exclusive_write", 00:18:15.500 "zoned": false, 00:18:15.500 "supported_io_types": { 00:18:15.500 "read": true, 00:18:15.500 "write": true, 00:18:15.500 "unmap": true, 00:18:15.500 "flush": true, 00:18:15.500 "reset": true, 
00:18:15.500 "nvme_admin": false, 00:18:15.500 "nvme_io": false, 00:18:15.500 "nvme_io_md": false, 00:18:15.500 "write_zeroes": true, 00:18:15.500 "zcopy": true, 00:18:15.500 "get_zone_info": false, 00:18:15.500 "zone_management": false, 00:18:15.500 "zone_append": false, 00:18:15.500 "compare": false, 00:18:15.500 "compare_and_write": false, 00:18:15.500 "abort": true, 00:18:15.500 "seek_hole": false, 00:18:15.500 "seek_data": false, 00:18:15.500 "copy": true, 00:18:15.500 "nvme_iov_md": false 00:18:15.500 }, 00:18:15.500 "memory_domains": [ 00:18:15.500 { 00:18:15.500 "dma_device_id": "system", 00:18:15.500 "dma_device_type": 1 00:18:15.500 }, 00:18:15.500 { 00:18:15.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.500 "dma_device_type": 2 00:18:15.500 } 00:18:15.500 ], 00:18:15.500 "driver_specific": {} 00:18:15.500 }' 00:18:15.500 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:15.500 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:15.500 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:15.500 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:15.500 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:15.500 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:15.500 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:15.500 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:15.759 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:15.759 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:15.759 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:18:15.759 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:15.759 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:15.759 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:15.759 22:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:15.759 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:15.759 "name": "BaseBdev4", 00:18:15.759 "aliases": [ 00:18:15.759 "a161fcb6-03da-4fe2-8dcb-8ec641d2ff4d" 00:18:15.759 ], 00:18:15.759 "product_name": "Malloc disk", 00:18:15.759 "block_size": 512, 00:18:15.759 "num_blocks": 65536, 00:18:15.759 "uuid": "a161fcb6-03da-4fe2-8dcb-8ec641d2ff4d", 00:18:15.759 "assigned_rate_limits": { 00:18:15.759 "rw_ios_per_sec": 0, 00:18:15.759 "rw_mbytes_per_sec": 0, 00:18:15.759 "r_mbytes_per_sec": 0, 00:18:15.759 "w_mbytes_per_sec": 0 00:18:15.759 }, 00:18:15.759 "claimed": true, 00:18:15.759 "claim_type": "exclusive_write", 00:18:15.759 "zoned": false, 00:18:15.759 "supported_io_types": { 00:18:15.759 "read": true, 00:18:15.759 "write": true, 00:18:15.759 "unmap": true, 00:18:15.759 "flush": true, 00:18:15.759 "reset": true, 00:18:15.759 "nvme_admin": false, 00:18:15.759 "nvme_io": false, 00:18:15.759 "nvme_io_md": false, 00:18:15.759 "write_zeroes": true, 00:18:15.759 "zcopy": true, 00:18:15.759 "get_zone_info": false, 00:18:15.759 "zone_management": false, 00:18:15.759 "zone_append": false, 00:18:15.759 "compare": false, 00:18:15.759 "compare_and_write": false, 00:18:15.759 "abort": true, 00:18:15.759 "seek_hole": false, 00:18:15.759 "seek_data": false, 00:18:15.759 "copy": true, 00:18:15.759 "nvme_iov_md": false 00:18:15.759 }, 00:18:15.759 "memory_domains": [ 00:18:15.759 { 
00:18:15.759 "dma_device_id": "system", 00:18:15.759 "dma_device_type": 1 00:18:15.759 }, 00:18:15.759 { 00:18:15.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.759 "dma_device_type": 2 00:18:15.759 } 00:18:15.759 ], 00:18:15.759 "driver_specific": {} 00:18:15.759 }' 00:18:15.759 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:16.015 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:16.015 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:16.015 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:16.015 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:16.015 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:16.015 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:16.015 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:16.015 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:16.015 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:16.015 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:16.273 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:16.273 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:16.273 [2024-07-13 22:01:35.574068] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:16.273 [2024-07-13 22:01:35.574098] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:18:16.273 [2024-07-13 22:01:35.574144] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:16.273 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:16.273 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:18:16.273 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:16.273 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:18:16.273 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:16.273 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:18:16.273 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:16.273 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:18:16.273 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:16.273 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:16.273 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:16.273 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:16.273 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:16.273 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:16.273 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:16.273 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.273 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.531 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:16.531 "name": "Existed_Raid", 00:18:16.531 "uuid": "6b136776-77f0-421e-93f5-aa18903d4d2b", 00:18:16.531 "strip_size_kb": 64, 00:18:16.531 "state": "offline", 00:18:16.531 "raid_level": "raid0", 00:18:16.531 "superblock": true, 00:18:16.531 "num_base_bdevs": 4, 00:18:16.531 "num_base_bdevs_discovered": 3, 00:18:16.531 "num_base_bdevs_operational": 3, 00:18:16.531 "base_bdevs_list": [ 00:18:16.531 { 00:18:16.531 "name": null, 00:18:16.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.531 "is_configured": false, 00:18:16.531 "data_offset": 2048, 00:18:16.531 "data_size": 63488 00:18:16.531 }, 00:18:16.531 { 00:18:16.531 "name": "BaseBdev2", 00:18:16.531 "uuid": "252cd7c4-3d0e-49bf-b8d5-b113ebfa40d8", 00:18:16.531 "is_configured": true, 00:18:16.531 "data_offset": 2048, 00:18:16.531 "data_size": 63488 00:18:16.531 }, 00:18:16.531 { 00:18:16.531 "name": "BaseBdev3", 00:18:16.531 "uuid": "26252f3b-3b0d-40fd-a403-08c5bdb2d341", 00:18:16.531 "is_configured": true, 00:18:16.531 "data_offset": 2048, 00:18:16.531 "data_size": 63488 00:18:16.531 }, 00:18:16.531 { 00:18:16.531 "name": "BaseBdev4", 00:18:16.531 "uuid": "a161fcb6-03da-4fe2-8dcb-8ec641d2ff4d", 00:18:16.531 "is_configured": true, 00:18:16.531 "data_offset": 2048, 00:18:16.531 "data_size": 63488 00:18:16.531 } 00:18:16.531 ] 00:18:16.531 }' 00:18:16.531 22:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:16.531 22:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.097 22:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:17.097 22:01:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:17.097 22:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.097 22:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:17.097 22:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:17.097 22:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:17.097 22:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:17.356 [2024-07-13 22:01:36.592772] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:17.356 22:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:17.356 22:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:17.356 22:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.356 22:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:17.615 22:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:17.615 22:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:17.615 22:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:17.874 [2024-07-13 22:01:37.020681] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: 
*DEBUG*: BaseBdev3 00:18:17.874 22:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:17.874 22:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:17.874 22:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.874 22:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:18.133 22:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:18.133 22:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:18.133 22:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:18.133 [2024-07-13 22:01:37.439648] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:18.133 [2024-07-13 22:01:37.439698] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:18:18.392 22:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:18.392 22:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:18.392 22:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.392 22:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:18.392 22:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:18.392 22:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' 
-n '' ']' 00:18:18.392 22:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:18:18.392 22:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:18.392 22:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:18.392 22:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:18.652 BaseBdev2 00:18:18.652 22:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:18.652 22:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:18.652 22:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:18.652 22:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:18.652 22:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:18.652 22:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:18.652 22:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:18.930 22:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:18.930 [ 00:18:18.930 { 00:18:18.930 "name": "BaseBdev2", 00:18:18.930 "aliases": [ 00:18:18.930 "4bab027a-d2eb-4066-b4d2-a98f97ffda62" 00:18:18.930 ], 00:18:18.930 "product_name": "Malloc disk", 00:18:18.930 "block_size": 512, 00:18:18.930 "num_blocks": 65536, 00:18:18.930 "uuid": 
"4bab027a-d2eb-4066-b4d2-a98f97ffda62", 00:18:18.930 "assigned_rate_limits": { 00:18:18.930 "rw_ios_per_sec": 0, 00:18:18.931 "rw_mbytes_per_sec": 0, 00:18:18.931 "r_mbytes_per_sec": 0, 00:18:18.931 "w_mbytes_per_sec": 0 00:18:18.931 }, 00:18:18.931 "claimed": false, 00:18:18.931 "zoned": false, 00:18:18.931 "supported_io_types": { 00:18:18.931 "read": true, 00:18:18.931 "write": true, 00:18:18.931 "unmap": true, 00:18:18.931 "flush": true, 00:18:18.931 "reset": true, 00:18:18.931 "nvme_admin": false, 00:18:18.931 "nvme_io": false, 00:18:18.931 "nvme_io_md": false, 00:18:18.931 "write_zeroes": true, 00:18:18.931 "zcopy": true, 00:18:18.931 "get_zone_info": false, 00:18:18.931 "zone_management": false, 00:18:18.931 "zone_append": false, 00:18:18.931 "compare": false, 00:18:18.931 "compare_and_write": false, 00:18:18.931 "abort": true, 00:18:18.931 "seek_hole": false, 00:18:18.931 "seek_data": false, 00:18:18.931 "copy": true, 00:18:18.931 "nvme_iov_md": false 00:18:18.931 }, 00:18:18.931 "memory_domains": [ 00:18:18.931 { 00:18:18.931 "dma_device_id": "system", 00:18:18.931 "dma_device_type": 1 00:18:18.931 }, 00:18:18.931 { 00:18:18.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.931 "dma_device_type": 2 00:18:18.931 } 00:18:18.931 ], 00:18:18.931 "driver_specific": {} 00:18:18.931 } 00:18:18.931 ] 00:18:18.931 22:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:18.931 22:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:18.931 22:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:18.931 22:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:19.189 BaseBdev3 00:18:19.189 22:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev 
BaseBdev3 00:18:19.189 22:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:19.189 22:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:19.189 22:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:19.189 22:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:19.189 22:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:19.189 22:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:19.448 22:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:19.448 [ 00:18:19.448 { 00:18:19.448 "name": "BaseBdev3", 00:18:19.448 "aliases": [ 00:18:19.448 "f49d1a47-4530-4df7-8fce-df490452ca74" 00:18:19.448 ], 00:18:19.448 "product_name": "Malloc disk", 00:18:19.448 "block_size": 512, 00:18:19.448 "num_blocks": 65536, 00:18:19.448 "uuid": "f49d1a47-4530-4df7-8fce-df490452ca74", 00:18:19.448 "assigned_rate_limits": { 00:18:19.448 "rw_ios_per_sec": 0, 00:18:19.448 "rw_mbytes_per_sec": 0, 00:18:19.448 "r_mbytes_per_sec": 0, 00:18:19.448 "w_mbytes_per_sec": 0 00:18:19.448 }, 00:18:19.448 "claimed": false, 00:18:19.448 "zoned": false, 00:18:19.448 "supported_io_types": { 00:18:19.448 "read": true, 00:18:19.448 "write": true, 00:18:19.448 "unmap": true, 00:18:19.448 "flush": true, 00:18:19.448 "reset": true, 00:18:19.448 "nvme_admin": false, 00:18:19.448 "nvme_io": false, 00:18:19.448 "nvme_io_md": false, 00:18:19.448 "write_zeroes": true, 00:18:19.448 "zcopy": true, 00:18:19.448 "get_zone_info": false, 00:18:19.448 
"zone_management": false, 00:18:19.448 "zone_append": false, 00:18:19.448 "compare": false, 00:18:19.448 "compare_and_write": false, 00:18:19.448 "abort": true, 00:18:19.448 "seek_hole": false, 00:18:19.448 "seek_data": false, 00:18:19.448 "copy": true, 00:18:19.448 "nvme_iov_md": false 00:18:19.448 }, 00:18:19.448 "memory_domains": [ 00:18:19.448 { 00:18:19.448 "dma_device_id": "system", 00:18:19.448 "dma_device_type": 1 00:18:19.448 }, 00:18:19.448 { 00:18:19.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.448 "dma_device_type": 2 00:18:19.448 } 00:18:19.448 ], 00:18:19.448 "driver_specific": {} 00:18:19.448 } 00:18:19.448 ] 00:18:19.448 22:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:19.448 22:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:19.448 22:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:19.448 22:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:19.708 BaseBdev4 00:18:19.708 22:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:18:19.708 22:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:18:19.708 22:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:19.708 22:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:19.708 22:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:19.708 22:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:19.708 22:01:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:19.967 22:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:19.967 [ 00:18:19.967 { 00:18:19.967 "name": "BaseBdev4", 00:18:19.967 "aliases": [ 00:18:19.967 "b835bb9e-02d5-4a43-8030-fa917b38711b" 00:18:19.967 ], 00:18:19.967 "product_name": "Malloc disk", 00:18:19.967 "block_size": 512, 00:18:19.967 "num_blocks": 65536, 00:18:19.967 "uuid": "b835bb9e-02d5-4a43-8030-fa917b38711b", 00:18:19.967 "assigned_rate_limits": { 00:18:19.967 "rw_ios_per_sec": 0, 00:18:19.967 "rw_mbytes_per_sec": 0, 00:18:19.967 "r_mbytes_per_sec": 0, 00:18:19.967 "w_mbytes_per_sec": 0 00:18:19.967 }, 00:18:19.967 "claimed": false, 00:18:19.967 "zoned": false, 00:18:19.967 "supported_io_types": { 00:18:19.967 "read": true, 00:18:19.967 "write": true, 00:18:19.967 "unmap": true, 00:18:19.967 "flush": true, 00:18:19.967 "reset": true, 00:18:19.967 "nvme_admin": false, 00:18:19.967 "nvme_io": false, 00:18:19.967 "nvme_io_md": false, 00:18:19.967 "write_zeroes": true, 00:18:19.967 "zcopy": true, 00:18:19.967 "get_zone_info": false, 00:18:19.967 "zone_management": false, 00:18:19.967 "zone_append": false, 00:18:19.967 "compare": false, 00:18:19.967 "compare_and_write": false, 00:18:19.967 "abort": true, 00:18:19.967 "seek_hole": false, 00:18:19.967 "seek_data": false, 00:18:19.967 "copy": true, 00:18:19.967 "nvme_iov_md": false 00:18:19.967 }, 00:18:19.967 "memory_domains": [ 00:18:19.967 { 00:18:19.967 "dma_device_id": "system", 00:18:19.967 "dma_device_type": 1 00:18:19.967 }, 00:18:19.967 { 00:18:19.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.967 "dma_device_type": 2 00:18:19.967 } 00:18:19.967 ], 00:18:19.967 "driver_specific": {} 00:18:19.967 } 00:18:19.967 ] 00:18:19.967 22:01:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:19.967 22:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:19.967 22:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:19.967 22:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:20.226 [2024-07-13 22:01:39.467417] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:20.226 [2024-07-13 22:01:39.467456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:20.226 [2024-07-13 22:01:39.467497] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:20.226 [2024-07-13 22:01:39.469233] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:20.226 [2024-07-13 22:01:39.469281] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:20.226 22:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:20.226 22:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:20.226 22:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:20.226 22:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:20.226 22:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:20.226 22:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:20.226 22:01:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:20.226 22:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:20.226 22:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:20.226 22:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:20.226 22:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.226 22:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.483 22:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:20.483 "name": "Existed_Raid", 00:18:20.483 "uuid": "10f187e9-2700-454e-b0a3-93a45482473f", 00:18:20.483 "strip_size_kb": 64, 00:18:20.483 "state": "configuring", 00:18:20.483 "raid_level": "raid0", 00:18:20.483 "superblock": true, 00:18:20.483 "num_base_bdevs": 4, 00:18:20.484 "num_base_bdevs_discovered": 3, 00:18:20.484 "num_base_bdevs_operational": 4, 00:18:20.484 "base_bdevs_list": [ 00:18:20.484 { 00:18:20.484 "name": "BaseBdev1", 00:18:20.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.484 "is_configured": false, 00:18:20.484 "data_offset": 0, 00:18:20.484 "data_size": 0 00:18:20.484 }, 00:18:20.484 { 00:18:20.484 "name": "BaseBdev2", 00:18:20.484 "uuid": "4bab027a-d2eb-4066-b4d2-a98f97ffda62", 00:18:20.484 "is_configured": true, 00:18:20.484 "data_offset": 2048, 00:18:20.484 "data_size": 63488 00:18:20.484 }, 00:18:20.484 { 00:18:20.484 "name": "BaseBdev3", 00:18:20.484 "uuid": "f49d1a47-4530-4df7-8fce-df490452ca74", 00:18:20.484 "is_configured": true, 00:18:20.484 "data_offset": 2048, 00:18:20.484 "data_size": 63488 00:18:20.484 }, 00:18:20.484 { 00:18:20.484 "name": "BaseBdev4", 
00:18:20.484 "uuid": "b835bb9e-02d5-4a43-8030-fa917b38711b", 00:18:20.484 "is_configured": true, 00:18:20.484 "data_offset": 2048, 00:18:20.484 "data_size": 63488 00:18:20.484 } 00:18:20.484 ] 00:18:20.484 }' 00:18:20.484 22:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:20.484 22:01:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.049 22:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:21.049 [2024-07-13 22:01:40.317598] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:21.049 22:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:21.049 22:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:21.049 22:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:21.049 22:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:21.049 22:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:21.049 22:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:21.049 22:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:21.049 22:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:21.049 22:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:21.049 22:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:21.049 22:01:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.049 22:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.307 22:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:21.307 "name": "Existed_Raid", 00:18:21.307 "uuid": "10f187e9-2700-454e-b0a3-93a45482473f", 00:18:21.307 "strip_size_kb": 64, 00:18:21.307 "state": "configuring", 00:18:21.307 "raid_level": "raid0", 00:18:21.307 "superblock": true, 00:18:21.307 "num_base_bdevs": 4, 00:18:21.307 "num_base_bdevs_discovered": 2, 00:18:21.307 "num_base_bdevs_operational": 4, 00:18:21.307 "base_bdevs_list": [ 00:18:21.307 { 00:18:21.307 "name": "BaseBdev1", 00:18:21.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.307 "is_configured": false, 00:18:21.307 "data_offset": 0, 00:18:21.307 "data_size": 0 00:18:21.307 }, 00:18:21.307 { 00:18:21.307 "name": null, 00:18:21.307 "uuid": "4bab027a-d2eb-4066-b4d2-a98f97ffda62", 00:18:21.307 "is_configured": false, 00:18:21.307 "data_offset": 2048, 00:18:21.307 "data_size": 63488 00:18:21.307 }, 00:18:21.307 { 00:18:21.307 "name": "BaseBdev3", 00:18:21.308 "uuid": "f49d1a47-4530-4df7-8fce-df490452ca74", 00:18:21.308 "is_configured": true, 00:18:21.308 "data_offset": 2048, 00:18:21.308 "data_size": 63488 00:18:21.308 }, 00:18:21.308 { 00:18:21.308 "name": "BaseBdev4", 00:18:21.308 "uuid": "b835bb9e-02d5-4a43-8030-fa917b38711b", 00:18:21.308 "is_configured": true, 00:18:21.308 "data_offset": 2048, 00:18:21.308 "data_size": 63488 00:18:21.308 } 00:18:21.308 ] 00:18:21.308 }' 00:18:21.308 22:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:21.308 22:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.888 22:01:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.888 22:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:21.888 22:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:21.888 22:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:22.146 [2024-07-13 22:01:41.352836] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:22.146 BaseBdev1 00:18:22.146 22:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:22.146 22:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:22.146 22:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:22.146 22:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:22.146 22:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:22.146 22:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:22.146 22:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:22.146 22:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:22.405 [ 00:18:22.405 { 00:18:22.405 "name": "BaseBdev1", 00:18:22.405 "aliases": [ 00:18:22.405 
"155b1e9c-b030-4ce7-846c-c47430e403dc" 00:18:22.405 ], 00:18:22.405 "product_name": "Malloc disk", 00:18:22.405 "block_size": 512, 00:18:22.405 "num_blocks": 65536, 00:18:22.405 "uuid": "155b1e9c-b030-4ce7-846c-c47430e403dc", 00:18:22.405 "assigned_rate_limits": { 00:18:22.405 "rw_ios_per_sec": 0, 00:18:22.405 "rw_mbytes_per_sec": 0, 00:18:22.405 "r_mbytes_per_sec": 0, 00:18:22.405 "w_mbytes_per_sec": 0 00:18:22.405 }, 00:18:22.405 "claimed": true, 00:18:22.405 "claim_type": "exclusive_write", 00:18:22.405 "zoned": false, 00:18:22.405 "supported_io_types": { 00:18:22.405 "read": true, 00:18:22.405 "write": true, 00:18:22.405 "unmap": true, 00:18:22.405 "flush": true, 00:18:22.405 "reset": true, 00:18:22.405 "nvme_admin": false, 00:18:22.405 "nvme_io": false, 00:18:22.405 "nvme_io_md": false, 00:18:22.405 "write_zeroes": true, 00:18:22.405 "zcopy": true, 00:18:22.405 "get_zone_info": false, 00:18:22.405 "zone_management": false, 00:18:22.405 "zone_append": false, 00:18:22.405 "compare": false, 00:18:22.405 "compare_and_write": false, 00:18:22.405 "abort": true, 00:18:22.405 "seek_hole": false, 00:18:22.405 "seek_data": false, 00:18:22.405 "copy": true, 00:18:22.405 "nvme_iov_md": false 00:18:22.405 }, 00:18:22.405 "memory_domains": [ 00:18:22.405 { 00:18:22.405 "dma_device_id": "system", 00:18:22.405 "dma_device_type": 1 00:18:22.406 }, 00:18:22.406 { 00:18:22.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.406 "dma_device_type": 2 00:18:22.406 } 00:18:22.406 ], 00:18:22.406 "driver_specific": {} 00:18:22.406 } 00:18:22.406 ] 00:18:22.406 22:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:22.406 22:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:22.406 22:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:22.406 22:01:41 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:22.406 22:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:22.406 22:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:22.406 22:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:22.406 22:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:22.406 22:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:22.406 22:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:22.406 22:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:22.406 22:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.406 22:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.665 22:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:22.665 "name": "Existed_Raid", 00:18:22.665 "uuid": "10f187e9-2700-454e-b0a3-93a45482473f", 00:18:22.665 "strip_size_kb": 64, 00:18:22.665 "state": "configuring", 00:18:22.665 "raid_level": "raid0", 00:18:22.665 "superblock": true, 00:18:22.665 "num_base_bdevs": 4, 00:18:22.665 "num_base_bdevs_discovered": 3, 00:18:22.665 "num_base_bdevs_operational": 4, 00:18:22.665 "base_bdevs_list": [ 00:18:22.665 { 00:18:22.665 "name": "BaseBdev1", 00:18:22.665 "uuid": "155b1e9c-b030-4ce7-846c-c47430e403dc", 00:18:22.665 "is_configured": true, 00:18:22.665 "data_offset": 2048, 00:18:22.665 "data_size": 63488 00:18:22.665 }, 00:18:22.665 { 00:18:22.665 "name": null, 00:18:22.665 "uuid": 
"4bab027a-d2eb-4066-b4d2-a98f97ffda62", 00:18:22.665 "is_configured": false, 00:18:22.665 "data_offset": 2048, 00:18:22.665 "data_size": 63488 00:18:22.665 }, 00:18:22.665 { 00:18:22.665 "name": "BaseBdev3", 00:18:22.665 "uuid": "f49d1a47-4530-4df7-8fce-df490452ca74", 00:18:22.665 "is_configured": true, 00:18:22.665 "data_offset": 2048, 00:18:22.665 "data_size": 63488 00:18:22.665 }, 00:18:22.665 { 00:18:22.665 "name": "BaseBdev4", 00:18:22.665 "uuid": "b835bb9e-02d5-4a43-8030-fa917b38711b", 00:18:22.665 "is_configured": true, 00:18:22.665 "data_offset": 2048, 00:18:22.665 "data_size": 63488 00:18:22.665 } 00:18:22.665 ] 00:18:22.665 }' 00:18:22.665 22:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:22.665 22:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.232 22:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.232 22:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:23.232 22:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:23.232 22:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:23.490 [2024-07-13 22:01:42.676388] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:23.490 22:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:23.490 22:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:23.490 22:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:18:23.490 22:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:23.490 22:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:23.490 22:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:23.490 22:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:23.490 22:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:23.490 22:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:23.490 22:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:23.490 22:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.490 22:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.490 22:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:23.490 "name": "Existed_Raid", 00:18:23.490 "uuid": "10f187e9-2700-454e-b0a3-93a45482473f", 00:18:23.490 "strip_size_kb": 64, 00:18:23.490 "state": "configuring", 00:18:23.490 "raid_level": "raid0", 00:18:23.490 "superblock": true, 00:18:23.490 "num_base_bdevs": 4, 00:18:23.490 "num_base_bdevs_discovered": 2, 00:18:23.490 "num_base_bdevs_operational": 4, 00:18:23.490 "base_bdevs_list": [ 00:18:23.490 { 00:18:23.490 "name": "BaseBdev1", 00:18:23.490 "uuid": "155b1e9c-b030-4ce7-846c-c47430e403dc", 00:18:23.490 "is_configured": true, 00:18:23.490 "data_offset": 2048, 00:18:23.490 "data_size": 63488 00:18:23.490 }, 00:18:23.490 { 00:18:23.490 "name": null, 00:18:23.490 "uuid": "4bab027a-d2eb-4066-b4d2-a98f97ffda62", 
00:18:23.490 "is_configured": false, 00:18:23.490 "data_offset": 2048, 00:18:23.490 "data_size": 63488 00:18:23.490 }, 00:18:23.490 { 00:18:23.490 "name": null, 00:18:23.490 "uuid": "f49d1a47-4530-4df7-8fce-df490452ca74", 00:18:23.490 "is_configured": false, 00:18:23.490 "data_offset": 2048, 00:18:23.490 "data_size": 63488 00:18:23.490 }, 00:18:23.490 { 00:18:23.490 "name": "BaseBdev4", 00:18:23.490 "uuid": "b835bb9e-02d5-4a43-8030-fa917b38711b", 00:18:23.490 "is_configured": true, 00:18:23.490 "data_offset": 2048, 00:18:23.490 "data_size": 63488 00:18:23.490 } 00:18:23.490 ] 00:18:23.490 }' 00:18:23.490 22:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:23.490 22:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.057 22:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:24.057 22:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.314 22:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:24.314 22:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:24.314 [2024-07-13 22:01:43.639127] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:24.314 22:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:24.314 22:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:24.314 22:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:18:24.314 22:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:24.314 22:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:24.315 22:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:24.315 22:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:24.315 22:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:24.315 22:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:24.315 22:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:24.315 22:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.315 22:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.573 22:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:24.573 "name": "Existed_Raid", 00:18:24.573 "uuid": "10f187e9-2700-454e-b0a3-93a45482473f", 00:18:24.573 "strip_size_kb": 64, 00:18:24.573 "state": "configuring", 00:18:24.573 "raid_level": "raid0", 00:18:24.573 "superblock": true, 00:18:24.573 "num_base_bdevs": 4, 00:18:24.573 "num_base_bdevs_discovered": 3, 00:18:24.573 "num_base_bdevs_operational": 4, 00:18:24.573 "base_bdevs_list": [ 00:18:24.573 { 00:18:24.573 "name": "BaseBdev1", 00:18:24.573 "uuid": "155b1e9c-b030-4ce7-846c-c47430e403dc", 00:18:24.573 "is_configured": true, 00:18:24.573 "data_offset": 2048, 00:18:24.573 "data_size": 63488 00:18:24.573 }, 00:18:24.573 { 00:18:24.573 "name": null, 00:18:24.573 "uuid": "4bab027a-d2eb-4066-b4d2-a98f97ffda62", 
00:18:24.573 "is_configured": false, 00:18:24.573 "data_offset": 2048, 00:18:24.573 "data_size": 63488 00:18:24.573 }, 00:18:24.573 { 00:18:24.573 "name": "BaseBdev3", 00:18:24.573 "uuid": "f49d1a47-4530-4df7-8fce-df490452ca74", 00:18:24.573 "is_configured": true, 00:18:24.573 "data_offset": 2048, 00:18:24.573 "data_size": 63488 00:18:24.573 }, 00:18:24.573 { 00:18:24.573 "name": "BaseBdev4", 00:18:24.573 "uuid": "b835bb9e-02d5-4a43-8030-fa917b38711b", 00:18:24.573 "is_configured": true, 00:18:24.573 "data_offset": 2048, 00:18:24.573 "data_size": 63488 00:18:24.573 } 00:18:24.573 ] 00:18:24.573 }' 00:18:24.573 22:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:24.573 22:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.140 22:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.140 22:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:25.140 22:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:25.140 22:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:25.399 [2024-07-13 22:01:44.649804] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:25.399 22:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:25.399 22:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:25.399 22:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:25.399 22:01:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:25.399 22:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:25.399 22:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:25.399 22:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:25.399 22:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:25.399 22:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:25.399 22:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:25.400 22:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.400 22:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.674 22:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:25.674 "name": "Existed_Raid", 00:18:25.674 "uuid": "10f187e9-2700-454e-b0a3-93a45482473f", 00:18:25.674 "strip_size_kb": 64, 00:18:25.674 "state": "configuring", 00:18:25.674 "raid_level": "raid0", 00:18:25.674 "superblock": true, 00:18:25.674 "num_base_bdevs": 4, 00:18:25.674 "num_base_bdevs_discovered": 2, 00:18:25.674 "num_base_bdevs_operational": 4, 00:18:25.674 "base_bdevs_list": [ 00:18:25.674 { 00:18:25.674 "name": null, 00:18:25.674 "uuid": "155b1e9c-b030-4ce7-846c-c47430e403dc", 00:18:25.674 "is_configured": false, 00:18:25.674 "data_offset": 2048, 00:18:25.674 "data_size": 63488 00:18:25.674 }, 00:18:25.674 { 00:18:25.674 "name": null, 00:18:25.674 "uuid": "4bab027a-d2eb-4066-b4d2-a98f97ffda62", 00:18:25.674 "is_configured": false, 00:18:25.674 
"data_offset": 2048, 00:18:25.674 "data_size": 63488 00:18:25.674 }, 00:18:25.674 { 00:18:25.674 "name": "BaseBdev3", 00:18:25.674 "uuid": "f49d1a47-4530-4df7-8fce-df490452ca74", 00:18:25.674 "is_configured": true, 00:18:25.674 "data_offset": 2048, 00:18:25.674 "data_size": 63488 00:18:25.674 }, 00:18:25.674 { 00:18:25.674 "name": "BaseBdev4", 00:18:25.674 "uuid": "b835bb9e-02d5-4a43-8030-fa917b38711b", 00:18:25.674 "is_configured": true, 00:18:25.674 "data_offset": 2048, 00:18:25.674 "data_size": 63488 00:18:25.674 } 00:18:25.674 ] 00:18:25.674 }' 00:18:25.674 22:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:25.674 22:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.251 22:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.251 22:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:26.251 22:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:26.251 22:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:26.509 [2024-07-13 22:01:45.732186] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:26.509 22:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:26.509 22:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:26.509 22:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:26.509 22:01:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:26.509 22:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:26.509 22:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:26.509 22:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:26.509 22:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:26.509 22:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:26.509 22:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:26.509 22:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.509 22:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.769 22:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:26.769 "name": "Existed_Raid", 00:18:26.769 "uuid": "10f187e9-2700-454e-b0a3-93a45482473f", 00:18:26.769 "strip_size_kb": 64, 00:18:26.769 "state": "configuring", 00:18:26.769 "raid_level": "raid0", 00:18:26.769 "superblock": true, 00:18:26.769 "num_base_bdevs": 4, 00:18:26.769 "num_base_bdevs_discovered": 3, 00:18:26.769 "num_base_bdevs_operational": 4, 00:18:26.769 "base_bdevs_list": [ 00:18:26.769 { 00:18:26.769 "name": null, 00:18:26.769 "uuid": "155b1e9c-b030-4ce7-846c-c47430e403dc", 00:18:26.769 "is_configured": false, 00:18:26.769 "data_offset": 2048, 00:18:26.769 "data_size": 63488 00:18:26.769 }, 00:18:26.769 { 00:18:26.769 "name": "BaseBdev2", 00:18:26.769 "uuid": "4bab027a-d2eb-4066-b4d2-a98f97ffda62", 00:18:26.769 "is_configured": true, 00:18:26.769 
"data_offset": 2048, 00:18:26.769 "data_size": 63488 00:18:26.769 }, 00:18:26.769 { 00:18:26.769 "name": "BaseBdev3", 00:18:26.769 "uuid": "f49d1a47-4530-4df7-8fce-df490452ca74", 00:18:26.769 "is_configured": true, 00:18:26.769 "data_offset": 2048, 00:18:26.769 "data_size": 63488 00:18:26.769 }, 00:18:26.769 { 00:18:26.769 "name": "BaseBdev4", 00:18:26.769 "uuid": "b835bb9e-02d5-4a43-8030-fa917b38711b", 00:18:26.769 "is_configured": true, 00:18:26.769 "data_offset": 2048, 00:18:26.769 "data_size": 63488 00:18:26.769 } 00:18:26.769 ] 00:18:26.769 }' 00:18:26.769 22:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:26.769 22:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.028 22:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.028 22:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:27.286 22:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:27.286 22:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.286 22:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:27.545 22:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 155b1e9c-b030-4ce7-846c-c47430e403dc 00:18:27.545 [2024-07-13 22:01:46.923476] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:27.545 [2024-07-13 22:01:46.923686] 
bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042080 00:18:27.545 [2024-07-13 22:01:46.923701] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:27.545 [2024-07-13 22:01:46.923956] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:18:27.545 [2024-07-13 22:01:46.924125] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042080 00:18:27.545 [2024-07-13 22:01:46.924138] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000042080 00:18:27.545 [2024-07-13 22:01:46.924263] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.545 NewBaseBdev 00:18:27.804 22:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:27.804 22:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:18:27.804 22:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:27.804 22:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:27.804 22:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:27.804 22:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:27.804 22:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:27.804 22:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:28.062 [ 00:18:28.062 { 00:18:28.062 "name": "NewBaseBdev", 00:18:28.062 "aliases": [ 00:18:28.062 
"155b1e9c-b030-4ce7-846c-c47430e403dc" 00:18:28.062 ], 00:18:28.062 "product_name": "Malloc disk", 00:18:28.062 "block_size": 512, 00:18:28.062 "num_blocks": 65536, 00:18:28.062 "uuid": "155b1e9c-b030-4ce7-846c-c47430e403dc", 00:18:28.062 "assigned_rate_limits": { 00:18:28.062 "rw_ios_per_sec": 0, 00:18:28.062 "rw_mbytes_per_sec": 0, 00:18:28.062 "r_mbytes_per_sec": 0, 00:18:28.062 "w_mbytes_per_sec": 0 00:18:28.062 }, 00:18:28.062 "claimed": true, 00:18:28.062 "claim_type": "exclusive_write", 00:18:28.062 "zoned": false, 00:18:28.062 "supported_io_types": { 00:18:28.062 "read": true, 00:18:28.062 "write": true, 00:18:28.062 "unmap": true, 00:18:28.062 "flush": true, 00:18:28.062 "reset": true, 00:18:28.062 "nvme_admin": false, 00:18:28.062 "nvme_io": false, 00:18:28.062 "nvme_io_md": false, 00:18:28.062 "write_zeroes": true, 00:18:28.062 "zcopy": true, 00:18:28.062 "get_zone_info": false, 00:18:28.062 "zone_management": false, 00:18:28.062 "zone_append": false, 00:18:28.062 "compare": false, 00:18:28.062 "compare_and_write": false, 00:18:28.062 "abort": true, 00:18:28.062 "seek_hole": false, 00:18:28.062 "seek_data": false, 00:18:28.062 "copy": true, 00:18:28.062 "nvme_iov_md": false 00:18:28.062 }, 00:18:28.062 "memory_domains": [ 00:18:28.062 { 00:18:28.062 "dma_device_id": "system", 00:18:28.062 "dma_device_type": 1 00:18:28.062 }, 00:18:28.062 { 00:18:28.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.062 "dma_device_type": 2 00:18:28.062 } 00:18:28.062 ], 00:18:28.062 "driver_specific": {} 00:18:28.062 } 00:18:28.062 ] 00:18:28.062 22:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:28.062 22:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:28.062 22:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:28.062 22:01:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:28.062 22:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:28.062 22:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:28.062 22:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:28.062 22:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:28.063 22:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:28.063 22:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:28.063 22:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:28.063 22:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.063 22:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.063 22:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:28.063 "name": "Existed_Raid", 00:18:28.063 "uuid": "10f187e9-2700-454e-b0a3-93a45482473f", 00:18:28.063 "strip_size_kb": 64, 00:18:28.063 "state": "online", 00:18:28.063 "raid_level": "raid0", 00:18:28.063 "superblock": true, 00:18:28.063 "num_base_bdevs": 4, 00:18:28.063 "num_base_bdevs_discovered": 4, 00:18:28.063 "num_base_bdevs_operational": 4, 00:18:28.063 "base_bdevs_list": [ 00:18:28.063 { 00:18:28.063 "name": "NewBaseBdev", 00:18:28.063 "uuid": "155b1e9c-b030-4ce7-846c-c47430e403dc", 00:18:28.063 "is_configured": true, 00:18:28.063 "data_offset": 2048, 00:18:28.063 "data_size": 63488 00:18:28.063 }, 00:18:28.063 { 00:18:28.063 "name": "BaseBdev2", 00:18:28.063 "uuid": 
"4bab027a-d2eb-4066-b4d2-a98f97ffda62", 00:18:28.063 "is_configured": true, 00:18:28.063 "data_offset": 2048, 00:18:28.063 "data_size": 63488 00:18:28.063 }, 00:18:28.063 { 00:18:28.063 "name": "BaseBdev3", 00:18:28.063 "uuid": "f49d1a47-4530-4df7-8fce-df490452ca74", 00:18:28.063 "is_configured": true, 00:18:28.063 "data_offset": 2048, 00:18:28.063 "data_size": 63488 00:18:28.063 }, 00:18:28.063 { 00:18:28.063 "name": "BaseBdev4", 00:18:28.063 "uuid": "b835bb9e-02d5-4a43-8030-fa917b38711b", 00:18:28.063 "is_configured": true, 00:18:28.063 "data_offset": 2048, 00:18:28.063 "data_size": 63488 00:18:28.063 } 00:18:28.063 ] 00:18:28.063 }' 00:18:28.063 22:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:28.063 22:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.629 22:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:28.630 22:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:28.630 22:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:28.630 22:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:28.630 22:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:28.630 22:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:28.630 22:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:28.630 22:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:28.888 [2024-07-13 22:01:48.066853] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:28.888 22:01:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:28.888 "name": "Existed_Raid", 00:18:28.888 "aliases": [ 00:18:28.888 "10f187e9-2700-454e-b0a3-93a45482473f" 00:18:28.888 ], 00:18:28.888 "product_name": "Raid Volume", 00:18:28.888 "block_size": 512, 00:18:28.888 "num_blocks": 253952, 00:18:28.888 "uuid": "10f187e9-2700-454e-b0a3-93a45482473f", 00:18:28.888 "assigned_rate_limits": { 00:18:28.888 "rw_ios_per_sec": 0, 00:18:28.888 "rw_mbytes_per_sec": 0, 00:18:28.888 "r_mbytes_per_sec": 0, 00:18:28.888 "w_mbytes_per_sec": 0 00:18:28.888 }, 00:18:28.888 "claimed": false, 00:18:28.888 "zoned": false, 00:18:28.888 "supported_io_types": { 00:18:28.888 "read": true, 00:18:28.888 "write": true, 00:18:28.888 "unmap": true, 00:18:28.888 "flush": true, 00:18:28.888 "reset": true, 00:18:28.888 "nvme_admin": false, 00:18:28.888 "nvme_io": false, 00:18:28.888 "nvme_io_md": false, 00:18:28.888 "write_zeroes": true, 00:18:28.888 "zcopy": false, 00:18:28.888 "get_zone_info": false, 00:18:28.888 "zone_management": false, 00:18:28.888 "zone_append": false, 00:18:28.888 "compare": false, 00:18:28.888 "compare_and_write": false, 00:18:28.888 "abort": false, 00:18:28.888 "seek_hole": false, 00:18:28.888 "seek_data": false, 00:18:28.888 "copy": false, 00:18:28.888 "nvme_iov_md": false 00:18:28.888 }, 00:18:28.888 "memory_domains": [ 00:18:28.888 { 00:18:28.888 "dma_device_id": "system", 00:18:28.888 "dma_device_type": 1 00:18:28.888 }, 00:18:28.888 { 00:18:28.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.888 "dma_device_type": 2 00:18:28.888 }, 00:18:28.888 { 00:18:28.888 "dma_device_id": "system", 00:18:28.888 "dma_device_type": 1 00:18:28.888 }, 00:18:28.888 { 00:18:28.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.888 "dma_device_type": 2 00:18:28.888 }, 00:18:28.888 { 00:18:28.888 "dma_device_id": "system", 00:18:28.888 "dma_device_type": 1 00:18:28.888 }, 00:18:28.888 { 00:18:28.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:18:28.888 "dma_device_type": 2 00:18:28.888 }, 00:18:28.888 { 00:18:28.888 "dma_device_id": "system", 00:18:28.888 "dma_device_type": 1 00:18:28.888 }, 00:18:28.888 { 00:18:28.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.888 "dma_device_type": 2 00:18:28.888 } 00:18:28.888 ], 00:18:28.888 "driver_specific": { 00:18:28.888 "raid": { 00:18:28.888 "uuid": "10f187e9-2700-454e-b0a3-93a45482473f", 00:18:28.888 "strip_size_kb": 64, 00:18:28.888 "state": "online", 00:18:28.888 "raid_level": "raid0", 00:18:28.888 "superblock": true, 00:18:28.888 "num_base_bdevs": 4, 00:18:28.888 "num_base_bdevs_discovered": 4, 00:18:28.888 "num_base_bdevs_operational": 4, 00:18:28.889 "base_bdevs_list": [ 00:18:28.889 { 00:18:28.889 "name": "NewBaseBdev", 00:18:28.889 "uuid": "155b1e9c-b030-4ce7-846c-c47430e403dc", 00:18:28.889 "is_configured": true, 00:18:28.889 "data_offset": 2048, 00:18:28.889 "data_size": 63488 00:18:28.889 }, 00:18:28.889 { 00:18:28.889 "name": "BaseBdev2", 00:18:28.889 "uuid": "4bab027a-d2eb-4066-b4d2-a98f97ffda62", 00:18:28.889 "is_configured": true, 00:18:28.889 "data_offset": 2048, 00:18:28.889 "data_size": 63488 00:18:28.889 }, 00:18:28.889 { 00:18:28.889 "name": "BaseBdev3", 00:18:28.889 "uuid": "f49d1a47-4530-4df7-8fce-df490452ca74", 00:18:28.889 "is_configured": true, 00:18:28.889 "data_offset": 2048, 00:18:28.889 "data_size": 63488 00:18:28.889 }, 00:18:28.889 { 00:18:28.889 "name": "BaseBdev4", 00:18:28.889 "uuid": "b835bb9e-02d5-4a43-8030-fa917b38711b", 00:18:28.889 "is_configured": true, 00:18:28.889 "data_offset": 2048, 00:18:28.889 "data_size": 63488 00:18:28.889 } 00:18:28.889 ] 00:18:28.889 } 00:18:28.889 } 00:18:28.889 }' 00:18:28.889 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:28.889 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:28.889 BaseBdev2 
00:18:28.889 BaseBdev3 00:18:28.889 BaseBdev4' 00:18:28.889 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:28.889 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:28.889 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:29.146 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:29.146 "name": "NewBaseBdev", 00:18:29.146 "aliases": [ 00:18:29.146 "155b1e9c-b030-4ce7-846c-c47430e403dc" 00:18:29.146 ], 00:18:29.146 "product_name": "Malloc disk", 00:18:29.146 "block_size": 512, 00:18:29.146 "num_blocks": 65536, 00:18:29.146 "uuid": "155b1e9c-b030-4ce7-846c-c47430e403dc", 00:18:29.146 "assigned_rate_limits": { 00:18:29.146 "rw_ios_per_sec": 0, 00:18:29.146 "rw_mbytes_per_sec": 0, 00:18:29.146 "r_mbytes_per_sec": 0, 00:18:29.146 "w_mbytes_per_sec": 0 00:18:29.146 }, 00:18:29.146 "claimed": true, 00:18:29.146 "claim_type": "exclusive_write", 00:18:29.146 "zoned": false, 00:18:29.146 "supported_io_types": { 00:18:29.146 "read": true, 00:18:29.146 "write": true, 00:18:29.146 "unmap": true, 00:18:29.146 "flush": true, 00:18:29.146 "reset": true, 00:18:29.146 "nvme_admin": false, 00:18:29.146 "nvme_io": false, 00:18:29.146 "nvme_io_md": false, 00:18:29.146 "write_zeroes": true, 00:18:29.146 "zcopy": true, 00:18:29.146 "get_zone_info": false, 00:18:29.146 "zone_management": false, 00:18:29.146 "zone_append": false, 00:18:29.146 "compare": false, 00:18:29.146 "compare_and_write": false, 00:18:29.146 "abort": true, 00:18:29.146 "seek_hole": false, 00:18:29.146 "seek_data": false, 00:18:29.146 "copy": true, 00:18:29.146 "nvme_iov_md": false 00:18:29.146 }, 00:18:29.146 "memory_domains": [ 00:18:29.146 { 00:18:29.146 "dma_device_id": "system", 00:18:29.146 "dma_device_type": 1 
00:18:29.146 }, 00:18:29.146 { 00:18:29.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.146 "dma_device_type": 2 00:18:29.146 } 00:18:29.146 ], 00:18:29.146 "driver_specific": {} 00:18:29.146 }' 00:18:29.146 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:29.146 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:29.146 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:29.146 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:29.146 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:29.146 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:29.146 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:29.146 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:29.146 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:29.146 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:29.405 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:29.405 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:29.405 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:29.405 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:29.405 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:29.405 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:18:29.405 "name": "BaseBdev2", 00:18:29.405 "aliases": [ 00:18:29.405 "4bab027a-d2eb-4066-b4d2-a98f97ffda62" 00:18:29.405 ], 00:18:29.405 "product_name": "Malloc disk", 00:18:29.405 "block_size": 512, 00:18:29.405 "num_blocks": 65536, 00:18:29.405 "uuid": "4bab027a-d2eb-4066-b4d2-a98f97ffda62", 00:18:29.405 "assigned_rate_limits": { 00:18:29.405 "rw_ios_per_sec": 0, 00:18:29.405 "rw_mbytes_per_sec": 0, 00:18:29.405 "r_mbytes_per_sec": 0, 00:18:29.405 "w_mbytes_per_sec": 0 00:18:29.405 }, 00:18:29.405 "claimed": true, 00:18:29.405 "claim_type": "exclusive_write", 00:18:29.405 "zoned": false, 00:18:29.405 "supported_io_types": { 00:18:29.405 "read": true, 00:18:29.405 "write": true, 00:18:29.405 "unmap": true, 00:18:29.405 "flush": true, 00:18:29.405 "reset": true, 00:18:29.405 "nvme_admin": false, 00:18:29.405 "nvme_io": false, 00:18:29.405 "nvme_io_md": false, 00:18:29.405 "write_zeroes": true, 00:18:29.405 "zcopy": true, 00:18:29.405 "get_zone_info": false, 00:18:29.405 "zone_management": false, 00:18:29.405 "zone_append": false, 00:18:29.405 "compare": false, 00:18:29.405 "compare_and_write": false, 00:18:29.405 "abort": true, 00:18:29.405 "seek_hole": false, 00:18:29.405 "seek_data": false, 00:18:29.405 "copy": true, 00:18:29.405 "nvme_iov_md": false 00:18:29.405 }, 00:18:29.405 "memory_domains": [ 00:18:29.405 { 00:18:29.405 "dma_device_id": "system", 00:18:29.405 "dma_device_type": 1 00:18:29.405 }, 00:18:29.405 { 00:18:29.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.405 "dma_device_type": 2 00:18:29.405 } 00:18:29.405 ], 00:18:29.405 "driver_specific": {} 00:18:29.405 }' 00:18:29.405 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:29.663 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:29.663 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:29.663 22:01:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:29.663 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:29.663 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:29.663 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:29.663 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:29.663 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:29.663 22:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:29.663 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:29.922 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:29.922 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:29.922 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:29.922 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:29.922 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:29.922 "name": "BaseBdev3", 00:18:29.922 "aliases": [ 00:18:29.922 "f49d1a47-4530-4df7-8fce-df490452ca74" 00:18:29.922 ], 00:18:29.922 "product_name": "Malloc disk", 00:18:29.922 "block_size": 512, 00:18:29.922 "num_blocks": 65536, 00:18:29.922 "uuid": "f49d1a47-4530-4df7-8fce-df490452ca74", 00:18:29.922 "assigned_rate_limits": { 00:18:29.922 "rw_ios_per_sec": 0, 00:18:29.922 "rw_mbytes_per_sec": 0, 00:18:29.922 "r_mbytes_per_sec": 0, 00:18:29.922 "w_mbytes_per_sec": 0 00:18:29.922 }, 00:18:29.922 "claimed": true, 
00:18:29.922 "claim_type": "exclusive_write", 00:18:29.922 "zoned": false, 00:18:29.922 "supported_io_types": { 00:18:29.922 "read": true, 00:18:29.922 "write": true, 00:18:29.922 "unmap": true, 00:18:29.922 "flush": true, 00:18:29.922 "reset": true, 00:18:29.922 "nvme_admin": false, 00:18:29.922 "nvme_io": false, 00:18:29.922 "nvme_io_md": false, 00:18:29.922 "write_zeroes": true, 00:18:29.922 "zcopy": true, 00:18:29.922 "get_zone_info": false, 00:18:29.922 "zone_management": false, 00:18:29.922 "zone_append": false, 00:18:29.922 "compare": false, 00:18:29.922 "compare_and_write": false, 00:18:29.922 "abort": true, 00:18:29.922 "seek_hole": false, 00:18:29.922 "seek_data": false, 00:18:29.922 "copy": true, 00:18:29.922 "nvme_iov_md": false 00:18:29.922 }, 00:18:29.922 "memory_domains": [ 00:18:29.922 { 00:18:29.922 "dma_device_id": "system", 00:18:29.922 "dma_device_type": 1 00:18:29.922 }, 00:18:29.922 { 00:18:29.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.922 "dma_device_type": 2 00:18:29.922 } 00:18:29.922 ], 00:18:29.922 "driver_specific": {} 00:18:29.922 }' 00:18:29.922 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:29.922 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:30.181 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:30.181 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:30.181 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:30.181 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:30.181 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:30.181 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:30.181 22:01:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:30.181 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:30.181 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:30.181 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:30.181 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:30.181 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:30.181 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:30.439 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:30.439 "name": "BaseBdev4", 00:18:30.439 "aliases": [ 00:18:30.439 "b835bb9e-02d5-4a43-8030-fa917b38711b" 00:18:30.439 ], 00:18:30.439 "product_name": "Malloc disk", 00:18:30.439 "block_size": 512, 00:18:30.439 "num_blocks": 65536, 00:18:30.439 "uuid": "b835bb9e-02d5-4a43-8030-fa917b38711b", 00:18:30.439 "assigned_rate_limits": { 00:18:30.439 "rw_ios_per_sec": 0, 00:18:30.439 "rw_mbytes_per_sec": 0, 00:18:30.439 "r_mbytes_per_sec": 0, 00:18:30.439 "w_mbytes_per_sec": 0 00:18:30.439 }, 00:18:30.439 "claimed": true, 00:18:30.439 "claim_type": "exclusive_write", 00:18:30.439 "zoned": false, 00:18:30.439 "supported_io_types": { 00:18:30.439 "read": true, 00:18:30.439 "write": true, 00:18:30.439 "unmap": true, 00:18:30.439 "flush": true, 00:18:30.439 "reset": true, 00:18:30.439 "nvme_admin": false, 00:18:30.439 "nvme_io": false, 00:18:30.439 "nvme_io_md": false, 00:18:30.439 "write_zeroes": true, 00:18:30.439 "zcopy": true, 00:18:30.439 "get_zone_info": false, 00:18:30.439 "zone_management": false, 00:18:30.439 "zone_append": false, 00:18:30.439 "compare": false, 00:18:30.439 
"compare_and_write": false, 00:18:30.439 "abort": true, 00:18:30.439 "seek_hole": false, 00:18:30.439 "seek_data": false, 00:18:30.439 "copy": true, 00:18:30.439 "nvme_iov_md": false 00:18:30.439 }, 00:18:30.439 "memory_domains": [ 00:18:30.439 { 00:18:30.439 "dma_device_id": "system", 00:18:30.439 "dma_device_type": 1 00:18:30.439 }, 00:18:30.439 { 00:18:30.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.439 "dma_device_type": 2 00:18:30.439 } 00:18:30.439 ], 00:18:30.439 "driver_specific": {} 00:18:30.439 }' 00:18:30.439 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:30.439 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:30.439 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:30.439 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:30.698 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:30.698 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:30.698 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:30.698 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:30.698 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:30.698 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:30.698 22:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:30.698 22:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:30.698 22:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete 
Existed_Raid 00:18:30.956 [2024-07-13 22:01:50.164081] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:30.956 [2024-07-13 22:01:50.164110] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:30.956 [2024-07-13 22:01:50.164182] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:30.956 [2024-07-13 22:01:50.164244] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:30.956 [2024-07-13 22:01:50.164256] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042080 name Existed_Raid, state offline 00:18:30.956 22:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1417066 00:18:30.956 22:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1417066 ']' 00:18:30.956 22:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 1417066 00:18:30.956 22:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:18:30.956 22:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:30.956 22:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1417066 00:18:30.956 22:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:30.956 22:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:30.956 22:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1417066' 00:18:30.956 killing process with pid 1417066 00:18:30.956 22:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 1417066 00:18:30.956 [2024-07-13 22:01:50.240812] 
bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:30.956 22:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 1417066 00:18:31.215 [2024-07-13 22:01:50.564059] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:32.698 22:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:18:32.698 00:18:32.698 real 0m26.180s 00:18:32.698 user 0m46.002s 00:18:32.698 sys 0m4.869s 00:18:32.698 22:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:32.698 22:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.698 ************************************ 00:18:32.698 END TEST raid_state_function_test_sb 00:18:32.698 ************************************ 00:18:32.698 22:01:51 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:32.698 22:01:51 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:18:32.698 22:01:51 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:32.698 22:01:51 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:32.698 22:01:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:32.698 ************************************ 00:18:32.698 START TEST raid_superblock_test 00:18:32.698 ************************************ 00:18:32.698 22:01:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 4 00:18:32.698 22:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:18:32.698 22:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:18:32.698 22:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:18:32.698 22:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:18:32.698 
22:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:18:32.698 22:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:18:32.698 22:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:18:32.698 22:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:18:32.698 22:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:18:32.698 22:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:18:32.698 22:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:18:32.698 22:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:18:32.698 22:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:18:32.698 22:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:18:32.698 22:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:18:32.698 22:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:18:32.698 22:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1422237 00:18:32.698 22:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1422237 /var/tmp/spdk-raid.sock 00:18:32.698 22:01:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 1422237 ']' 00:18:32.698 22:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:32.698 22:01:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:32.698 22:01:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:18:32.698 22:01:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:32.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:32.698 22:01:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:32.698 22:01:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.698 [2024-07-13 22:01:51.957498] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:32.698 [2024-07-13 22:01:51.957606] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1422237 ] 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:32.698 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:18:32.698 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:32.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.698 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:32.956 [2024-07-13 22:01:52.115986] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.956 [2024-07-13 22:01:52.322296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.214 [2024-07-13 22:01:52.559516] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:33.214 [2024-07-13 22:01:52.559544] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:33.473 22:01:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:33.473 22:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:18:33.473 22:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:18:33.473 22:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:33.473 22:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:18:33.473 22:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:18:33.473 22:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:33.473 22:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:33.473 22:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:33.473 22:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:33.473 22:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:33.732 malloc1 00:18:33.732 22:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:33.732 [2024-07-13 22:01:53.086503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:33.732 [2024-07-13 22:01:53.086556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.732 [2024-07-13 22:01:53.086599] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:18:33.732 [2024-07-13 22:01:53.086611] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:18:33.732 [2024-07-13 22:01:53.088756] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.732 [2024-07-13 22:01:53.088786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:33.732 pt1 00:18:33.732 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:33.732 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:33.732 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:18:33.732 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:18:33.732 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:33.732 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:33.732 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:33.732 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:33.732 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:33.990 malloc2 00:18:33.990 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:34.249 [2024-07-13 22:01:53.458024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:34.249 [2024-07-13 22:01:53.458073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.249 [2024-07-13 22:01:53.458112] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000040280 00:18:34.249 [2024-07-13 22:01:53.458123] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.249 [2024-07-13 22:01:53.460203] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.249 [2024-07-13 22:01:53.460234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:34.249 pt2 00:18:34.249 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:34.249 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:34.249 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:18:34.249 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:18:34.249 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:34.249 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:34.249 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:34.249 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:34.249 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:34.508 malloc3 00:18:34.508 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:34.508 [2024-07-13 22:01:53.825393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:34.508 [2024-07-13 22:01:53.825444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:18:34.508 [2024-07-13 22:01:53.825473] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:18:34.508 [2024-07-13 22:01:53.825484] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.508 [2024-07-13 22:01:53.827615] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.508 [2024-07-13 22:01:53.827645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:34.508 pt3 00:18:34.508 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:34.508 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:34.508 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:18:34.508 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:18:34.508 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:34.508 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:34.508 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:34.508 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:34.508 22:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:34.767 malloc4 00:18:34.767 22:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:35.026 [2024-07-13 22:01:54.197565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc4 00:18:35.026 [2024-07-13 22:01:54.197635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.026 [2024-07-13 22:01:54.197658] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80 00:18:35.026 [2024-07-13 22:01:54.197669] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.026 [2024-07-13 22:01:54.199759] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.026 [2024-07-13 22:01:54.199788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:35.026 pt4 00:18:35.026 22:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:35.026 22:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:35.026 22:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:35.026 [2024-07-13 22:01:54.366108] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:35.026 [2024-07-13 22:01:54.367847] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:35.026 [2024-07-13 22:01:54.367918] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:35.026 [2024-07-13 22:01:54.367958] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:35.026 [2024-07-13 22:01:54.368130] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042080 00:18:35.026 [2024-07-13 22:01:54.368142] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:35.026 [2024-07-13 22:01:54.368407] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:18:35.026 [2024-07-13 22:01:54.368587] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042080 00:18:35.026 [2024-07-13 22:01:54.368599] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042080 00:18:35.026 [2024-07-13 22:01:54.368754] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.026 22:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:35.026 22:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:35.026 22:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:35.026 22:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:35.026 22:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:35.026 22:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:35.026 22:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:35.026 22:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:35.026 22:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:35.026 22:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:35.026 22:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.026 22:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.285 22:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:35.285 "name": "raid_bdev1", 00:18:35.285 "uuid": "7646d07d-f073-4141-9120-00d9ee94ab32", 00:18:35.285 
"strip_size_kb": 64, 00:18:35.285 "state": "online", 00:18:35.285 "raid_level": "raid0", 00:18:35.285 "superblock": true, 00:18:35.285 "num_base_bdevs": 4, 00:18:35.285 "num_base_bdevs_discovered": 4, 00:18:35.285 "num_base_bdevs_operational": 4, 00:18:35.285 "base_bdevs_list": [ 00:18:35.285 { 00:18:35.285 "name": "pt1", 00:18:35.285 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:35.285 "is_configured": true, 00:18:35.285 "data_offset": 2048, 00:18:35.285 "data_size": 63488 00:18:35.285 }, 00:18:35.285 { 00:18:35.285 "name": "pt2", 00:18:35.285 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:35.285 "is_configured": true, 00:18:35.285 "data_offset": 2048, 00:18:35.285 "data_size": 63488 00:18:35.285 }, 00:18:35.285 { 00:18:35.285 "name": "pt3", 00:18:35.285 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:35.285 "is_configured": true, 00:18:35.285 "data_offset": 2048, 00:18:35.285 "data_size": 63488 00:18:35.285 }, 00:18:35.285 { 00:18:35.285 "name": "pt4", 00:18:35.285 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:35.285 "is_configured": true, 00:18:35.285 "data_offset": 2048, 00:18:35.285 "data_size": 63488 00:18:35.285 } 00:18:35.285 ] 00:18:35.285 }' 00:18:35.285 22:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:35.285 22:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.852 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:18:35.852 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:35.852 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:35.852 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:35.852 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:35.852 22:01:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@198 -- # local name 00:18:35.852 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:35.852 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:35.852 [2024-07-13 22:01:55.164390] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:35.852 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:35.852 "name": "raid_bdev1", 00:18:35.852 "aliases": [ 00:18:35.852 "7646d07d-f073-4141-9120-00d9ee94ab32" 00:18:35.852 ], 00:18:35.852 "product_name": "Raid Volume", 00:18:35.852 "block_size": 512, 00:18:35.852 "num_blocks": 253952, 00:18:35.852 "uuid": "7646d07d-f073-4141-9120-00d9ee94ab32", 00:18:35.852 "assigned_rate_limits": { 00:18:35.852 "rw_ios_per_sec": 0, 00:18:35.852 "rw_mbytes_per_sec": 0, 00:18:35.852 "r_mbytes_per_sec": 0, 00:18:35.852 "w_mbytes_per_sec": 0 00:18:35.852 }, 00:18:35.853 "claimed": false, 00:18:35.853 "zoned": false, 00:18:35.853 "supported_io_types": { 00:18:35.853 "read": true, 00:18:35.853 "write": true, 00:18:35.853 "unmap": true, 00:18:35.853 "flush": true, 00:18:35.853 "reset": true, 00:18:35.853 "nvme_admin": false, 00:18:35.853 "nvme_io": false, 00:18:35.853 "nvme_io_md": false, 00:18:35.853 "write_zeroes": true, 00:18:35.853 "zcopy": false, 00:18:35.853 "get_zone_info": false, 00:18:35.853 "zone_management": false, 00:18:35.853 "zone_append": false, 00:18:35.853 "compare": false, 00:18:35.853 "compare_and_write": false, 00:18:35.853 "abort": false, 00:18:35.853 "seek_hole": false, 00:18:35.853 "seek_data": false, 00:18:35.853 "copy": false, 00:18:35.853 "nvme_iov_md": false 00:18:35.853 }, 00:18:35.853 "memory_domains": [ 00:18:35.853 { 00:18:35.853 "dma_device_id": "system", 00:18:35.853 "dma_device_type": 1 00:18:35.853 }, 00:18:35.853 { 00:18:35.853 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:35.853 "dma_device_type": 2 00:18:35.853 }, 00:18:35.853 { 00:18:35.853 "dma_device_id": "system", 00:18:35.853 "dma_device_type": 1 00:18:35.853 }, 00:18:35.853 { 00:18:35.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.853 "dma_device_type": 2 00:18:35.853 }, 00:18:35.853 { 00:18:35.853 "dma_device_id": "system", 00:18:35.853 "dma_device_type": 1 00:18:35.853 }, 00:18:35.853 { 00:18:35.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.853 "dma_device_type": 2 00:18:35.853 }, 00:18:35.853 { 00:18:35.853 "dma_device_id": "system", 00:18:35.853 "dma_device_type": 1 00:18:35.853 }, 00:18:35.853 { 00:18:35.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.853 "dma_device_type": 2 00:18:35.853 } 00:18:35.853 ], 00:18:35.853 "driver_specific": { 00:18:35.853 "raid": { 00:18:35.853 "uuid": "7646d07d-f073-4141-9120-00d9ee94ab32", 00:18:35.853 "strip_size_kb": 64, 00:18:35.853 "state": "online", 00:18:35.853 "raid_level": "raid0", 00:18:35.853 "superblock": true, 00:18:35.853 "num_base_bdevs": 4, 00:18:35.853 "num_base_bdevs_discovered": 4, 00:18:35.853 "num_base_bdevs_operational": 4, 00:18:35.853 "base_bdevs_list": [ 00:18:35.853 { 00:18:35.853 "name": "pt1", 00:18:35.853 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:35.853 "is_configured": true, 00:18:35.853 "data_offset": 2048, 00:18:35.853 "data_size": 63488 00:18:35.853 }, 00:18:35.853 { 00:18:35.853 "name": "pt2", 00:18:35.853 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:35.853 "is_configured": true, 00:18:35.853 "data_offset": 2048, 00:18:35.853 "data_size": 63488 00:18:35.853 }, 00:18:35.853 { 00:18:35.853 "name": "pt3", 00:18:35.853 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:35.853 "is_configured": true, 00:18:35.853 "data_offset": 2048, 00:18:35.853 "data_size": 63488 00:18:35.853 }, 00:18:35.853 { 00:18:35.853 "name": "pt4", 00:18:35.853 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:35.853 "is_configured": true, 00:18:35.853 
"data_offset": 2048, 00:18:35.853 "data_size": 63488 00:18:35.853 } 00:18:35.853 ] 00:18:35.853 } 00:18:35.853 } 00:18:35.853 }' 00:18:35.853 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:35.853 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:35.853 pt2 00:18:35.853 pt3 00:18:35.853 pt4' 00:18:35.853 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:35.853 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:35.853 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:36.111 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:36.111 "name": "pt1", 00:18:36.111 "aliases": [ 00:18:36.111 "00000000-0000-0000-0000-000000000001" 00:18:36.111 ], 00:18:36.111 "product_name": "passthru", 00:18:36.111 "block_size": 512, 00:18:36.111 "num_blocks": 65536, 00:18:36.111 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:36.111 "assigned_rate_limits": { 00:18:36.111 "rw_ios_per_sec": 0, 00:18:36.111 "rw_mbytes_per_sec": 0, 00:18:36.111 "r_mbytes_per_sec": 0, 00:18:36.111 "w_mbytes_per_sec": 0 00:18:36.111 }, 00:18:36.111 "claimed": true, 00:18:36.111 "claim_type": "exclusive_write", 00:18:36.111 "zoned": false, 00:18:36.111 "supported_io_types": { 00:18:36.111 "read": true, 00:18:36.111 "write": true, 00:18:36.111 "unmap": true, 00:18:36.111 "flush": true, 00:18:36.111 "reset": true, 00:18:36.111 "nvme_admin": false, 00:18:36.111 "nvme_io": false, 00:18:36.111 "nvme_io_md": false, 00:18:36.111 "write_zeroes": true, 00:18:36.111 "zcopy": true, 00:18:36.111 "get_zone_info": false, 00:18:36.111 "zone_management": false, 00:18:36.111 "zone_append": false, 
00:18:36.111 "compare": false, 00:18:36.111 "compare_and_write": false, 00:18:36.111 "abort": true, 00:18:36.111 "seek_hole": false, 00:18:36.111 "seek_data": false, 00:18:36.111 "copy": true, 00:18:36.111 "nvme_iov_md": false 00:18:36.111 }, 00:18:36.111 "memory_domains": [ 00:18:36.111 { 00:18:36.111 "dma_device_id": "system", 00:18:36.111 "dma_device_type": 1 00:18:36.111 }, 00:18:36.111 { 00:18:36.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.111 "dma_device_type": 2 00:18:36.111 } 00:18:36.111 ], 00:18:36.111 "driver_specific": { 00:18:36.111 "passthru": { 00:18:36.111 "name": "pt1", 00:18:36.111 "base_bdev_name": "malloc1" 00:18:36.111 } 00:18:36.111 } 00:18:36.111 }' 00:18:36.111 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:36.111 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:36.111 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:36.111 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:36.111 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:36.369 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:36.369 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:36.369 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:36.369 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:36.369 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:36.369 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:36.369 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:36.369 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 
00:18:36.370 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:36.370 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:36.627 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:36.627 "name": "pt2", 00:18:36.627 "aliases": [ 00:18:36.627 "00000000-0000-0000-0000-000000000002" 00:18:36.627 ], 00:18:36.627 "product_name": "passthru", 00:18:36.627 "block_size": 512, 00:18:36.627 "num_blocks": 65536, 00:18:36.627 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:36.627 "assigned_rate_limits": { 00:18:36.627 "rw_ios_per_sec": 0, 00:18:36.627 "rw_mbytes_per_sec": 0, 00:18:36.627 "r_mbytes_per_sec": 0, 00:18:36.627 "w_mbytes_per_sec": 0 00:18:36.627 }, 00:18:36.628 "claimed": true, 00:18:36.628 "claim_type": "exclusive_write", 00:18:36.628 "zoned": false, 00:18:36.628 "supported_io_types": { 00:18:36.628 "read": true, 00:18:36.628 "write": true, 00:18:36.628 "unmap": true, 00:18:36.628 "flush": true, 00:18:36.628 "reset": true, 00:18:36.628 "nvme_admin": false, 00:18:36.628 "nvme_io": false, 00:18:36.628 "nvme_io_md": false, 00:18:36.628 "write_zeroes": true, 00:18:36.628 "zcopy": true, 00:18:36.628 "get_zone_info": false, 00:18:36.628 "zone_management": false, 00:18:36.628 "zone_append": false, 00:18:36.628 "compare": false, 00:18:36.628 "compare_and_write": false, 00:18:36.628 "abort": true, 00:18:36.628 "seek_hole": false, 00:18:36.628 "seek_data": false, 00:18:36.628 "copy": true, 00:18:36.628 "nvme_iov_md": false 00:18:36.628 }, 00:18:36.628 "memory_domains": [ 00:18:36.628 { 00:18:36.628 "dma_device_id": "system", 00:18:36.628 "dma_device_type": 1 00:18:36.628 }, 00:18:36.628 { 00:18:36.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.628 "dma_device_type": 2 00:18:36.628 } 00:18:36.628 ], 00:18:36.628 "driver_specific": { 00:18:36.628 
"passthru": { 00:18:36.628 "name": "pt2", 00:18:36.628 "base_bdev_name": "malloc2" 00:18:36.628 } 00:18:36.628 } 00:18:36.628 }' 00:18:36.628 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:36.628 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:36.628 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:36.628 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:36.628 22:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:36.934 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:36.934 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:36.934 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:36.934 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:36.934 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:36.934 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:36.934 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:36.934 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:36.934 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:36.934 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:37.192 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:37.192 "name": "pt3", 00:18:37.192 "aliases": [ 00:18:37.192 "00000000-0000-0000-0000-000000000003" 00:18:37.192 ], 00:18:37.192 "product_name": "passthru", 00:18:37.192 
"block_size": 512, 00:18:37.192 "num_blocks": 65536, 00:18:37.192 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:37.192 "assigned_rate_limits": { 00:18:37.192 "rw_ios_per_sec": 0, 00:18:37.192 "rw_mbytes_per_sec": 0, 00:18:37.192 "r_mbytes_per_sec": 0, 00:18:37.192 "w_mbytes_per_sec": 0 00:18:37.192 }, 00:18:37.192 "claimed": true, 00:18:37.192 "claim_type": "exclusive_write", 00:18:37.192 "zoned": false, 00:18:37.192 "supported_io_types": { 00:18:37.192 "read": true, 00:18:37.192 "write": true, 00:18:37.192 "unmap": true, 00:18:37.192 "flush": true, 00:18:37.192 "reset": true, 00:18:37.192 "nvme_admin": false, 00:18:37.192 "nvme_io": false, 00:18:37.192 "nvme_io_md": false, 00:18:37.192 "write_zeroes": true, 00:18:37.192 "zcopy": true, 00:18:37.192 "get_zone_info": false, 00:18:37.192 "zone_management": false, 00:18:37.192 "zone_append": false, 00:18:37.192 "compare": false, 00:18:37.192 "compare_and_write": false, 00:18:37.192 "abort": true, 00:18:37.192 "seek_hole": false, 00:18:37.192 "seek_data": false, 00:18:37.192 "copy": true, 00:18:37.192 "nvme_iov_md": false 00:18:37.192 }, 00:18:37.192 "memory_domains": [ 00:18:37.192 { 00:18:37.192 "dma_device_id": "system", 00:18:37.192 "dma_device_type": 1 00:18:37.192 }, 00:18:37.192 { 00:18:37.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.192 "dma_device_type": 2 00:18:37.192 } 00:18:37.192 ], 00:18:37.192 "driver_specific": { 00:18:37.192 "passthru": { 00:18:37.192 "name": "pt3", 00:18:37.192 "base_bdev_name": "malloc3" 00:18:37.192 } 00:18:37.192 } 00:18:37.192 }' 00:18:37.192 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:37.192 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:37.192 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:37.192 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:37.192 22:01:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:37.192 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:37.192 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:37.192 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:37.450 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:37.450 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:37.450 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:37.450 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:37.450 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:37.450 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:18:37.450 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:37.709 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:37.709 "name": "pt4", 00:18:37.709 "aliases": [ 00:18:37.709 "00000000-0000-0000-0000-000000000004" 00:18:37.709 ], 00:18:37.709 "product_name": "passthru", 00:18:37.709 "block_size": 512, 00:18:37.709 "num_blocks": 65536, 00:18:37.709 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:37.709 "assigned_rate_limits": { 00:18:37.709 "rw_ios_per_sec": 0, 00:18:37.709 "rw_mbytes_per_sec": 0, 00:18:37.709 "r_mbytes_per_sec": 0, 00:18:37.709 "w_mbytes_per_sec": 0 00:18:37.709 }, 00:18:37.709 "claimed": true, 00:18:37.709 "claim_type": "exclusive_write", 00:18:37.709 "zoned": false, 00:18:37.709 "supported_io_types": { 00:18:37.709 "read": true, 00:18:37.709 "write": true, 00:18:37.709 "unmap": true, 00:18:37.709 
"flush": true, 00:18:37.709 "reset": true, 00:18:37.709 "nvme_admin": false, 00:18:37.709 "nvme_io": false, 00:18:37.709 "nvme_io_md": false, 00:18:37.709 "write_zeroes": true, 00:18:37.709 "zcopy": true, 00:18:37.709 "get_zone_info": false, 00:18:37.709 "zone_management": false, 00:18:37.709 "zone_append": false, 00:18:37.709 "compare": false, 00:18:37.709 "compare_and_write": false, 00:18:37.709 "abort": true, 00:18:37.709 "seek_hole": false, 00:18:37.709 "seek_data": false, 00:18:37.709 "copy": true, 00:18:37.709 "nvme_iov_md": false 00:18:37.709 }, 00:18:37.709 "memory_domains": [ 00:18:37.709 { 00:18:37.709 "dma_device_id": "system", 00:18:37.709 "dma_device_type": 1 00:18:37.709 }, 00:18:37.709 { 00:18:37.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.709 "dma_device_type": 2 00:18:37.709 } 00:18:37.709 ], 00:18:37.709 "driver_specific": { 00:18:37.709 "passthru": { 00:18:37.709 "name": "pt4", 00:18:37.709 "base_bdev_name": "malloc4" 00:18:37.709 } 00:18:37.709 } 00:18:37.709 }' 00:18:37.709 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:37.709 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:37.709 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:37.709 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:37.709 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:37.709 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:37.709 22:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:37.709 22:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:37.709 22:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:37.709 22:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:18:37.709 22:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:37.968 22:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:37.968 22:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:18:37.968 22:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:37.968 [2024-07-13 22:01:57.277936] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:37.968 22:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=7646d07d-f073-4141-9120-00d9ee94ab32 00:18:37.968 22:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 7646d07d-f073-4141-9120-00d9ee94ab32 ']' 00:18:37.968 22:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:38.226 [2024-07-13 22:01:57.434051] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:38.226 [2024-07-13 22:01:57.434076] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:38.226 [2024-07-13 22:01:57.434151] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.226 [2024-07-13 22:01:57.434216] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:38.226 [2024-07-13 22:01:57.434230] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042080 name raid_bdev1, state offline 00:18:38.226 22:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.226 22:01:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:18:38.226 22:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:18:38.484 22:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:18:38.484 22:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:38.484 22:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:38.484 22:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:38.484 22:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:38.743 22:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:38.743 22:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:38.743 22:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:38.743 22:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:39.002 22:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:39.002 22:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:39.261 22:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:18:39.261 22:01:58 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:39.261 22:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:18:39.261 22:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:39.261 22:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:39.261 22:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:39.261 22:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:39.261 22:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:39.261 22:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:39.261 22:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:39.261 22:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:39.261 22:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:18:39.262 22:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 
malloc4' -n raid_bdev1 00:18:39.262 [2024-07-13 22:01:58.556983] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:39.262 [2024-07-13 22:01:58.558678] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:39.262 [2024-07-13 22:01:58.558722] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:39.262 [2024-07-13 22:01:58.558753] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:39.262 [2024-07-13 22:01:58.558796] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:39.262 [2024-07-13 22:01:58.558838] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:39.262 [2024-07-13 22:01:58.558856] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:39.262 [2024-07-13 22:01:58.558876] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:39.262 [2024-07-13 22:01:58.558890] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:39.262 [2024-07-13 22:01:58.558909] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state configuring 00:18:39.262 request: 00:18:39.262 { 00:18:39.262 "name": "raid_bdev1", 00:18:39.262 "raid_level": "raid0", 00:18:39.262 "base_bdevs": [ 00:18:39.262 "malloc1", 00:18:39.262 "malloc2", 00:18:39.262 "malloc3", 00:18:39.262 "malloc4" 00:18:39.262 ], 00:18:39.262 "strip_size_kb": 64, 00:18:39.262 "superblock": false, 00:18:39.262 "method": "bdev_raid_create", 00:18:39.262 "req_id": 1 00:18:39.262 } 00:18:39.262 Got JSON-RPC error response 00:18:39.262 response: 00:18:39.262 { 00:18:39.262 "code": -17, 00:18:39.262 "message": "Failed 
to create RAID bdev raid_bdev1: File exists" 00:18:39.262 } 00:18:39.262 22:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:18:39.262 22:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:39.262 22:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:39.262 22:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:39.262 22:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.262 22:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:18:39.521 22:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:18:39.521 22:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:18:39.521 22:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:39.521 [2024-07-13 22:01:58.897811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:39.521 [2024-07-13 22:01:58.897870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.521 [2024-07-13 22:01:58.897889] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:18:39.521 [2024-07-13 22:01:58.897909] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.521 [2024-07-13 22:01:58.900098] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.521 [2024-07-13 22:01:58.900131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:39.521 [2024-07-13 22:01:58.900207] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:39.521 [2024-07-13 22:01:58.900263] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:39.521 pt1 00:18:39.779 22:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:18:39.779 22:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:39.779 22:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:39.780 22:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:39.780 22:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:39.780 22:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:39.780 22:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:39.780 22:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:39.780 22:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:39.780 22:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:39.780 22:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.780 22:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.780 22:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:39.780 "name": "raid_bdev1", 00:18:39.780 "uuid": "7646d07d-f073-4141-9120-00d9ee94ab32", 00:18:39.780 "strip_size_kb": 64, 00:18:39.780 "state": "configuring", 00:18:39.780 "raid_level": "raid0", 00:18:39.780 "superblock": true, 
00:18:39.780 "num_base_bdevs": 4, 00:18:39.780 "num_base_bdevs_discovered": 1, 00:18:39.780 "num_base_bdevs_operational": 4, 00:18:39.780 "base_bdevs_list": [ 00:18:39.780 { 00:18:39.780 "name": "pt1", 00:18:39.780 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:39.780 "is_configured": true, 00:18:39.780 "data_offset": 2048, 00:18:39.780 "data_size": 63488 00:18:39.780 }, 00:18:39.780 { 00:18:39.780 "name": null, 00:18:39.780 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:39.780 "is_configured": false, 00:18:39.780 "data_offset": 2048, 00:18:39.780 "data_size": 63488 00:18:39.780 }, 00:18:39.780 { 00:18:39.780 "name": null, 00:18:39.780 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:39.780 "is_configured": false, 00:18:39.780 "data_offset": 2048, 00:18:39.780 "data_size": 63488 00:18:39.780 }, 00:18:39.780 { 00:18:39.780 "name": null, 00:18:39.780 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:39.780 "is_configured": false, 00:18:39.780 "data_offset": 2048, 00:18:39.780 "data_size": 63488 00:18:39.780 } 00:18:39.780 ] 00:18:39.780 }' 00:18:39.780 22:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:39.780 22:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.346 22:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:18:40.346 22:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:40.347 [2024-07-13 22:01:59.711991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:40.347 [2024-07-13 22:01:59.712049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.347 [2024-07-13 22:01:59.712087] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043580 
00:18:40.347 [2024-07-13 22:01:59.712101] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.347 [2024-07-13 22:01:59.712541] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.347 [2024-07-13 22:01:59.712562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:40.347 [2024-07-13 22:01:59.712635] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:40.347 [2024-07-13 22:01:59.712661] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:40.347 pt2 00:18:40.347 22:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:40.605 [2024-07-13 22:01:59.880467] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:40.605 22:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:18:40.605 22:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:40.605 22:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:40.605 22:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:40.605 22:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:40.605 22:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:40.605 22:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:40.605 22:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:40.605 22:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:40.605 22:01:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:18:40.605 22:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.605 22:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.864 22:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:40.864 "name": "raid_bdev1", 00:18:40.864 "uuid": "7646d07d-f073-4141-9120-00d9ee94ab32", 00:18:40.864 "strip_size_kb": 64, 00:18:40.864 "state": "configuring", 00:18:40.864 "raid_level": "raid0", 00:18:40.864 "superblock": true, 00:18:40.864 "num_base_bdevs": 4, 00:18:40.864 "num_base_bdevs_discovered": 1, 00:18:40.864 "num_base_bdevs_operational": 4, 00:18:40.864 "base_bdevs_list": [ 00:18:40.864 { 00:18:40.864 "name": "pt1", 00:18:40.864 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:40.864 "is_configured": true, 00:18:40.864 "data_offset": 2048, 00:18:40.864 "data_size": 63488 00:18:40.864 }, 00:18:40.864 { 00:18:40.864 "name": null, 00:18:40.864 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:40.864 "is_configured": false, 00:18:40.864 "data_offset": 2048, 00:18:40.864 "data_size": 63488 00:18:40.864 }, 00:18:40.864 { 00:18:40.864 "name": null, 00:18:40.864 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:40.864 "is_configured": false, 00:18:40.864 "data_offset": 2048, 00:18:40.864 "data_size": 63488 00:18:40.864 }, 00:18:40.864 { 00:18:40.864 "name": null, 00:18:40.864 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:40.864 "is_configured": false, 00:18:40.864 "data_offset": 2048, 00:18:40.864 "data_size": 63488 00:18:40.864 } 00:18:40.864 ] 00:18:40.864 }' 00:18:40.864 22:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:40.864 22:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.432 22:02:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:18:41.432 22:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:41.432 22:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:41.432 [2024-07-13 22:02:00.690553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:41.432 [2024-07-13 22:02:00.690605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.432 [2024-07-13 22:02:00.690627] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043880 00:18:41.432 [2024-07-13 22:02:00.690639] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.432 [2024-07-13 22:02:00.691113] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.432 [2024-07-13 22:02:00.691135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:41.432 [2024-07-13 22:02:00.691216] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:41.432 [2024-07-13 22:02:00.691238] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:41.432 pt2 00:18:41.432 22:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:41.432 22:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:41.432 22:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:41.691 [2024-07-13 22:02:00.867028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:18:41.691 [2024-07-13 22:02:00.867074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.691 [2024-07-13 22:02:00.867103] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043b80 00:18:41.691 [2024-07-13 22:02:00.867114] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.691 [2024-07-13 22:02:00.867563] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.691 [2024-07-13 22:02:00.867581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:41.691 [2024-07-13 22:02:00.867654] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:41.691 [2024-07-13 22:02:00.867675] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:41.691 pt3 00:18:41.691 22:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:41.691 22:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:41.691 22:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:41.691 [2024-07-13 22:02:01.039492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:41.691 [2024-07-13 22:02:01.039542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.691 [2024-07-13 22:02:01.039565] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043e80 00:18:41.691 [2024-07-13 22:02:01.039576] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.691 [2024-07-13 22:02:01.040008] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.691 [2024-07-13 22:02:01.040028] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:41.691 [2024-07-13 22:02:01.040109] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:41.691 [2024-07-13 22:02:01.040129] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:41.691 [2024-07-13 22:02:01.040288] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:18:41.691 [2024-07-13 22:02:01.040298] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:41.691 [2024-07-13 22:02:01.040533] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:18:41.691 [2024-07-13 22:02:01.040708] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:18:41.691 [2024-07-13 22:02:01.040726] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:18:41.691 [2024-07-13 22:02:01.040854] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.691 pt4 00:18:41.691 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:41.691 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:41.691 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:41.691 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:41.691 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:41.691 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:41.691 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:41.691 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:18:41.691 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:41.691 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:41.691 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:41.691 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:41.691 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.691 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.951 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:41.951 "name": "raid_bdev1", 00:18:41.951 "uuid": "7646d07d-f073-4141-9120-00d9ee94ab32", 00:18:41.951 "strip_size_kb": 64, 00:18:41.951 "state": "online", 00:18:41.951 "raid_level": "raid0", 00:18:41.951 "superblock": true, 00:18:41.951 "num_base_bdevs": 4, 00:18:41.951 "num_base_bdevs_discovered": 4, 00:18:41.951 "num_base_bdevs_operational": 4, 00:18:41.951 "base_bdevs_list": [ 00:18:41.951 { 00:18:41.951 "name": "pt1", 00:18:41.951 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:41.951 "is_configured": true, 00:18:41.951 "data_offset": 2048, 00:18:41.951 "data_size": 63488 00:18:41.951 }, 00:18:41.951 { 00:18:41.951 "name": "pt2", 00:18:41.951 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:41.951 "is_configured": true, 00:18:41.951 "data_offset": 2048, 00:18:41.951 "data_size": 63488 00:18:41.951 }, 00:18:41.951 { 00:18:41.951 "name": "pt3", 00:18:41.951 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:41.951 "is_configured": true, 00:18:41.951 "data_offset": 2048, 00:18:41.951 "data_size": 63488 00:18:41.951 }, 00:18:41.951 { 00:18:41.951 "name": "pt4", 00:18:41.951 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:18:41.951 "is_configured": true, 00:18:41.951 "data_offset": 2048, 00:18:41.951 "data_size": 63488 00:18:41.951 } 00:18:41.951 ] 00:18:41.951 }' 00:18:41.951 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:41.951 22:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.518 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:18:42.518 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:42.518 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:42.518 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:42.518 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:42.518 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:42.518 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:42.518 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:42.518 [2024-07-13 22:02:01.861908] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:42.518 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:42.518 "name": "raid_bdev1", 00:18:42.518 "aliases": [ 00:18:42.518 "7646d07d-f073-4141-9120-00d9ee94ab32" 00:18:42.518 ], 00:18:42.518 "product_name": "Raid Volume", 00:18:42.518 "block_size": 512, 00:18:42.518 "num_blocks": 253952, 00:18:42.518 "uuid": "7646d07d-f073-4141-9120-00d9ee94ab32", 00:18:42.518 "assigned_rate_limits": { 00:18:42.518 "rw_ios_per_sec": 0, 00:18:42.518 "rw_mbytes_per_sec": 0, 00:18:42.518 "r_mbytes_per_sec": 0, 
00:18:42.518 "w_mbytes_per_sec": 0 00:18:42.518 }, 00:18:42.518 "claimed": false, 00:18:42.518 "zoned": false, 00:18:42.518 "supported_io_types": { 00:18:42.518 "read": true, 00:18:42.519 "write": true, 00:18:42.519 "unmap": true, 00:18:42.519 "flush": true, 00:18:42.519 "reset": true, 00:18:42.519 "nvme_admin": false, 00:18:42.519 "nvme_io": false, 00:18:42.519 "nvme_io_md": false, 00:18:42.519 "write_zeroes": true, 00:18:42.519 "zcopy": false, 00:18:42.519 "get_zone_info": false, 00:18:42.519 "zone_management": false, 00:18:42.519 "zone_append": false, 00:18:42.519 "compare": false, 00:18:42.519 "compare_and_write": false, 00:18:42.519 "abort": false, 00:18:42.519 "seek_hole": false, 00:18:42.519 "seek_data": false, 00:18:42.519 "copy": false, 00:18:42.519 "nvme_iov_md": false 00:18:42.519 }, 00:18:42.519 "memory_domains": [ 00:18:42.519 { 00:18:42.519 "dma_device_id": "system", 00:18:42.519 "dma_device_type": 1 00:18:42.519 }, 00:18:42.519 { 00:18:42.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.519 "dma_device_type": 2 00:18:42.519 }, 00:18:42.519 { 00:18:42.519 "dma_device_id": "system", 00:18:42.519 "dma_device_type": 1 00:18:42.519 }, 00:18:42.519 { 00:18:42.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.519 "dma_device_type": 2 00:18:42.519 }, 00:18:42.519 { 00:18:42.519 "dma_device_id": "system", 00:18:42.519 "dma_device_type": 1 00:18:42.519 }, 00:18:42.519 { 00:18:42.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.519 "dma_device_type": 2 00:18:42.519 }, 00:18:42.519 { 00:18:42.519 "dma_device_id": "system", 00:18:42.519 "dma_device_type": 1 00:18:42.519 }, 00:18:42.519 { 00:18:42.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.519 "dma_device_type": 2 00:18:42.519 } 00:18:42.519 ], 00:18:42.519 "driver_specific": { 00:18:42.519 "raid": { 00:18:42.519 "uuid": "7646d07d-f073-4141-9120-00d9ee94ab32", 00:18:42.519 "strip_size_kb": 64, 00:18:42.519 "state": "online", 00:18:42.519 "raid_level": "raid0", 00:18:42.519 
"superblock": true, 00:18:42.519 "num_base_bdevs": 4, 00:18:42.519 "num_base_bdevs_discovered": 4, 00:18:42.519 "num_base_bdevs_operational": 4, 00:18:42.519 "base_bdevs_list": [ 00:18:42.519 { 00:18:42.519 "name": "pt1", 00:18:42.519 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:42.519 "is_configured": true, 00:18:42.519 "data_offset": 2048, 00:18:42.519 "data_size": 63488 00:18:42.519 }, 00:18:42.519 { 00:18:42.519 "name": "pt2", 00:18:42.519 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:42.519 "is_configured": true, 00:18:42.519 "data_offset": 2048, 00:18:42.519 "data_size": 63488 00:18:42.519 }, 00:18:42.519 { 00:18:42.519 "name": "pt3", 00:18:42.519 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:42.519 "is_configured": true, 00:18:42.519 "data_offset": 2048, 00:18:42.519 "data_size": 63488 00:18:42.519 }, 00:18:42.519 { 00:18:42.519 "name": "pt4", 00:18:42.519 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:42.519 "is_configured": true, 00:18:42.519 "data_offset": 2048, 00:18:42.519 "data_size": 63488 00:18:42.519 } 00:18:42.519 ] 00:18:42.519 } 00:18:42.519 } 00:18:42.519 }' 00:18:42.519 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:42.779 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:42.779 pt2 00:18:42.779 pt3 00:18:42.779 pt4' 00:18:42.779 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:42.779 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:42.779 22:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:42.779 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:42.779 "name": "pt1", 00:18:42.779 
"aliases": [ 00:18:42.779 "00000000-0000-0000-0000-000000000001" 00:18:42.779 ], 00:18:42.779 "product_name": "passthru", 00:18:42.779 "block_size": 512, 00:18:42.779 "num_blocks": 65536, 00:18:42.779 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:42.779 "assigned_rate_limits": { 00:18:42.779 "rw_ios_per_sec": 0, 00:18:42.779 "rw_mbytes_per_sec": 0, 00:18:42.779 "r_mbytes_per_sec": 0, 00:18:42.779 "w_mbytes_per_sec": 0 00:18:42.779 }, 00:18:42.779 "claimed": true, 00:18:42.779 "claim_type": "exclusive_write", 00:18:42.779 "zoned": false, 00:18:42.779 "supported_io_types": { 00:18:42.779 "read": true, 00:18:42.779 "write": true, 00:18:42.779 "unmap": true, 00:18:42.779 "flush": true, 00:18:42.779 "reset": true, 00:18:42.779 "nvme_admin": false, 00:18:42.779 "nvme_io": false, 00:18:42.779 "nvme_io_md": false, 00:18:42.779 "write_zeroes": true, 00:18:42.779 "zcopy": true, 00:18:42.779 "get_zone_info": false, 00:18:42.779 "zone_management": false, 00:18:42.779 "zone_append": false, 00:18:42.779 "compare": false, 00:18:42.779 "compare_and_write": false, 00:18:42.779 "abort": true, 00:18:42.779 "seek_hole": false, 00:18:42.779 "seek_data": false, 00:18:42.779 "copy": true, 00:18:42.779 "nvme_iov_md": false 00:18:42.779 }, 00:18:42.779 "memory_domains": [ 00:18:42.779 { 00:18:42.779 "dma_device_id": "system", 00:18:42.779 "dma_device_type": 1 00:18:42.779 }, 00:18:42.779 { 00:18:42.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.779 "dma_device_type": 2 00:18:42.779 } 00:18:42.779 ], 00:18:42.779 "driver_specific": { 00:18:42.779 "passthru": { 00:18:42.779 "name": "pt1", 00:18:42.779 "base_bdev_name": "malloc1" 00:18:42.779 } 00:18:42.779 } 00:18:42.779 }' 00:18:42.779 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:42.779 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:43.039 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:18:43.039 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:43.039 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:43.039 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:43.039 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:43.039 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:43.039 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:43.039 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:43.039 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:43.297 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:43.297 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:43.297 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:43.297 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:43.297 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:43.297 "name": "pt2", 00:18:43.297 "aliases": [ 00:18:43.297 "00000000-0000-0000-0000-000000000002" 00:18:43.297 ], 00:18:43.297 "product_name": "passthru", 00:18:43.297 "block_size": 512, 00:18:43.297 "num_blocks": 65536, 00:18:43.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:43.297 "assigned_rate_limits": { 00:18:43.297 "rw_ios_per_sec": 0, 00:18:43.297 "rw_mbytes_per_sec": 0, 00:18:43.297 "r_mbytes_per_sec": 0, 00:18:43.297 "w_mbytes_per_sec": 0 00:18:43.297 }, 00:18:43.297 "claimed": true, 00:18:43.297 "claim_type": "exclusive_write", 00:18:43.297 "zoned": false, 00:18:43.297 
"supported_io_types": { 00:18:43.297 "read": true, 00:18:43.297 "write": true, 00:18:43.297 "unmap": true, 00:18:43.297 "flush": true, 00:18:43.297 "reset": true, 00:18:43.297 "nvme_admin": false, 00:18:43.297 "nvme_io": false, 00:18:43.297 "nvme_io_md": false, 00:18:43.297 "write_zeroes": true, 00:18:43.297 "zcopy": true, 00:18:43.297 "get_zone_info": false, 00:18:43.297 "zone_management": false, 00:18:43.297 "zone_append": false, 00:18:43.297 "compare": false, 00:18:43.297 "compare_and_write": false, 00:18:43.297 "abort": true, 00:18:43.297 "seek_hole": false, 00:18:43.297 "seek_data": false, 00:18:43.297 "copy": true, 00:18:43.297 "nvme_iov_md": false 00:18:43.297 }, 00:18:43.297 "memory_domains": [ 00:18:43.297 { 00:18:43.297 "dma_device_id": "system", 00:18:43.297 "dma_device_type": 1 00:18:43.297 }, 00:18:43.297 { 00:18:43.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.297 "dma_device_type": 2 00:18:43.297 } 00:18:43.297 ], 00:18:43.297 "driver_specific": { 00:18:43.297 "passthru": { 00:18:43.297 "name": "pt2", 00:18:43.297 "base_bdev_name": "malloc2" 00:18:43.297 } 00:18:43.297 } 00:18:43.297 }' 00:18:43.297 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:43.297 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:43.556 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:43.556 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:43.556 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:43.556 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:43.556 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:43.557 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:43.557 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- 
# [[ null == null ]] 00:18:43.557 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:43.557 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:43.557 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:43.557 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:43.557 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:43.557 22:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:43.816 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:43.816 "name": "pt3", 00:18:43.816 "aliases": [ 00:18:43.816 "00000000-0000-0000-0000-000000000003" 00:18:43.816 ], 00:18:43.816 "product_name": "passthru", 00:18:43.816 "block_size": 512, 00:18:43.816 "num_blocks": 65536, 00:18:43.816 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:43.816 "assigned_rate_limits": { 00:18:43.816 "rw_ios_per_sec": 0, 00:18:43.816 "rw_mbytes_per_sec": 0, 00:18:43.816 "r_mbytes_per_sec": 0, 00:18:43.816 "w_mbytes_per_sec": 0 00:18:43.816 }, 00:18:43.816 "claimed": true, 00:18:43.816 "claim_type": "exclusive_write", 00:18:43.816 "zoned": false, 00:18:43.816 "supported_io_types": { 00:18:43.816 "read": true, 00:18:43.816 "write": true, 00:18:43.816 "unmap": true, 00:18:43.816 "flush": true, 00:18:43.816 "reset": true, 00:18:43.816 "nvme_admin": false, 00:18:43.816 "nvme_io": false, 00:18:43.816 "nvme_io_md": false, 00:18:43.816 "write_zeroes": true, 00:18:43.816 "zcopy": true, 00:18:43.816 "get_zone_info": false, 00:18:43.816 "zone_management": false, 00:18:43.816 "zone_append": false, 00:18:43.816 "compare": false, 00:18:43.816 "compare_and_write": false, 00:18:43.816 "abort": true, 00:18:43.816 "seek_hole": false, 
00:18:43.816 "seek_data": false, 00:18:43.816 "copy": true, 00:18:43.816 "nvme_iov_md": false 00:18:43.816 }, 00:18:43.816 "memory_domains": [ 00:18:43.816 { 00:18:43.816 "dma_device_id": "system", 00:18:43.816 "dma_device_type": 1 00:18:43.816 }, 00:18:43.816 { 00:18:43.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.816 "dma_device_type": 2 00:18:43.816 } 00:18:43.816 ], 00:18:43.816 "driver_specific": { 00:18:43.816 "passthru": { 00:18:43.816 "name": "pt3", 00:18:43.816 "base_bdev_name": "malloc3" 00:18:43.816 } 00:18:43.816 } 00:18:43.816 }' 00:18:43.816 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:43.816 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:43.816 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:43.816 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:43.816 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:44.075 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:44.075 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:44.075 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:44.075 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:44.075 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:44.075 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:44.075 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:44.075 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:44.075 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:18:44.075 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:44.334 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:44.334 "name": "pt4", 00:18:44.334 "aliases": [ 00:18:44.334 "00000000-0000-0000-0000-000000000004" 00:18:44.334 ], 00:18:44.334 "product_name": "passthru", 00:18:44.334 "block_size": 512, 00:18:44.334 "num_blocks": 65536, 00:18:44.334 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:44.334 "assigned_rate_limits": { 00:18:44.334 "rw_ios_per_sec": 0, 00:18:44.334 "rw_mbytes_per_sec": 0, 00:18:44.334 "r_mbytes_per_sec": 0, 00:18:44.334 "w_mbytes_per_sec": 0 00:18:44.334 }, 00:18:44.334 "claimed": true, 00:18:44.334 "claim_type": "exclusive_write", 00:18:44.334 "zoned": false, 00:18:44.334 "supported_io_types": { 00:18:44.334 "read": true, 00:18:44.334 "write": true, 00:18:44.334 "unmap": true, 00:18:44.334 "flush": true, 00:18:44.334 "reset": true, 00:18:44.334 "nvme_admin": false, 00:18:44.334 "nvme_io": false, 00:18:44.334 "nvme_io_md": false, 00:18:44.334 "write_zeroes": true, 00:18:44.334 "zcopy": true, 00:18:44.334 "get_zone_info": false, 00:18:44.334 "zone_management": false, 00:18:44.334 "zone_append": false, 00:18:44.335 "compare": false, 00:18:44.335 "compare_and_write": false, 00:18:44.335 "abort": true, 00:18:44.335 "seek_hole": false, 00:18:44.335 "seek_data": false, 00:18:44.335 "copy": true, 00:18:44.335 "nvme_iov_md": false 00:18:44.335 }, 00:18:44.335 "memory_domains": [ 00:18:44.335 { 00:18:44.335 "dma_device_id": "system", 00:18:44.335 "dma_device_type": 1 00:18:44.335 }, 00:18:44.335 { 00:18:44.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.335 "dma_device_type": 2 00:18:44.335 } 00:18:44.335 ], 00:18:44.335 "driver_specific": { 00:18:44.335 "passthru": { 00:18:44.335 "name": "pt4", 00:18:44.335 "base_bdev_name": "malloc4" 
00:18:44.335 } 00:18:44.335 } 00:18:44.335 }' 00:18:44.335 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:44.335 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:44.335 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:44.335 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:44.335 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:44.335 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:44.335 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:44.335 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:44.594 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:44.594 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:44.594 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:44.594 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:44.594 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:44.594 22:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:18:44.888 [2024-07-13 22:02:03.991506] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:44.888 22:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 7646d07d-f073-4141-9120-00d9ee94ab32 '!=' 7646d07d-f073-4141-9120-00d9ee94ab32 ']' 00:18:44.888 22:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:18:44.888 22:02:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@213 -- # case $1 in 00:18:44.888 22:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:44.888 22:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1422237 00:18:44.888 22:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 1422237 ']' 00:18:44.888 22:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 1422237 00:18:44.888 22:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:18:44.888 22:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:44.888 22:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1422237 00:18:44.888 22:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:44.888 22:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:44.888 22:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1422237' 00:18:44.888 killing process with pid 1422237 00:18:44.888 22:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 1422237 00:18:44.888 [2024-07-13 22:02:04.066000] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:44.888 [2024-07-13 22:02:04.066078] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:44.888 [2024-07-13 22:02:04.066145] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:44.888 [2024-07-13 22:02:04.066157] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:18:44.888 22:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 1422237 00:18:45.148 [2024-07-13 22:02:04.386493] 
bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:46.528 22:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:18:46.528 00:18:46.528 real 0m13.708s 00:18:46.528 user 0m23.212s 00:18:46.528 sys 0m2.541s 00:18:46.528 22:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:46.528 22:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.528 ************************************ 00:18:46.528 END TEST raid_superblock_test 00:18:46.529 ************************************ 00:18:46.529 22:02:05 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:46.529 22:02:05 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:18:46.529 22:02:05 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:46.529 22:02:05 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:46.529 22:02:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:46.529 ************************************ 00:18:46.529 START TEST raid_read_error_test 00:18:46.529 ************************************ 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 read 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # 
strip_size=64 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.a8eRH8uH1q 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1424894 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1424894 /var/tmp/spdk-raid.sock 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 1424894 ']' 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:46.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:46.529 22:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.529 [2024-07-13 22:02:05.754454] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:18:46.529 [2024-07-13 22:02:05.754554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1424894 ] 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3d:02.3 cannot be used 
00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:46.529 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:46.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:46.529 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:46.529 [2024-07-13 22:02:05.909884] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.788 [2024-07-13 22:02:06.114986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.045 [2024-07-13 22:02:06.361839] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:47.045 [2024-07-13 22:02:06.361869] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:47.304 22:02:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:47.304 22:02:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:18:47.304 22:02:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:47.304 22:02:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:47.563 BaseBdev1_malloc 00:18:47.563 22:02:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:47.563 true 00:18:47.563 22:02:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:47.821 [2024-07-13 22:02:07.054855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:47.821 [2024-07-13 22:02:07.054931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.821 [2024-07-13 22:02:07.054955] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:18:47.821 [2024-07-13 22:02:07.054972] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.821 [2024-07-13 22:02:07.057090] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.821 [2024-07-13 22:02:07.057124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:47.821 BaseBdev1 00:18:47.821 22:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:47.821 22:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:48.080 BaseBdev2_malloc 00:18:48.080 22:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:48.080 true 00:18:48.080 22:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:48.339 [2024-07-13 22:02:07.578350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:18:48.339 [2024-07-13 22:02:07.578398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.339 [2024-07-13 22:02:07.578436] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:18:48.339 [2024-07-13 22:02:07.578452] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.339 [2024-07-13 22:02:07.580526] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.339 [2024-07-13 22:02:07.580556] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:48.339 BaseBdev2 00:18:48.339 22:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:48.339 22:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:48.599 BaseBdev3_malloc 00:18:48.599 22:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:48.599 true 00:18:48.599 22:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:48.858 [2024-07-13 22:02:08.090677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:48.858 [2024-07-13 22:02:08.090725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.858 [2024-07-13 22:02:08.090776] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:18:48.858 [2024-07-13 22:02:08.090789] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.858 [2024-07-13 
22:02:08.092893] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.858 [2024-07-13 22:02:08.092946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:48.858 BaseBdev3 00:18:48.858 22:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:48.858 22:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:49.117 BaseBdev4_malloc 00:18:49.117 22:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:18:49.117 true 00:18:49.117 22:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:49.377 [2024-07-13 22:02:08.623222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:49.377 [2024-07-13 22:02:08.623278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.377 [2024-07-13 22:02:08.623321] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:18:49.377 [2024-07-13 22:02:08.623335] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.377 [2024-07-13 22:02:08.625433] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.377 [2024-07-13 22:02:08.625464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:49.377 BaseBdev4 00:18:49.377 22:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:18:49.637 [2024-07-13 22:02:08.783680] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:49.637 [2024-07-13 22:02:08.785427] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:49.637 [2024-07-13 22:02:08.785499] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:49.637 [2024-07-13 22:02:08.785559] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:49.637 [2024-07-13 22:02:08.785772] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042c80 00:18:49.637 [2024-07-13 22:02:08.785788] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:49.637 [2024-07-13 22:02:08.786071] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:18:49.637 [2024-07-13 22:02:08.786264] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042c80 00:18:49.637 [2024-07-13 22:02:08.786278] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042c80 00:18:49.637 [2024-07-13 22:02:08.786430] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.637 22:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:49.637 22:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:49.637 22:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:49.637 22:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:49.637 22:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:49.637 22:02:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:49.637 22:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:49.637 22:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:49.637 22:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:49.637 22:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:49.637 22:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.637 22:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.637 22:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:49.637 "name": "raid_bdev1", 00:18:49.637 "uuid": "fdbfc201-db37-4a33-8e18-d30f1e2e00ac", 00:18:49.637 "strip_size_kb": 64, 00:18:49.637 "state": "online", 00:18:49.637 "raid_level": "raid0", 00:18:49.637 "superblock": true, 00:18:49.637 "num_base_bdevs": 4, 00:18:49.637 "num_base_bdevs_discovered": 4, 00:18:49.637 "num_base_bdevs_operational": 4, 00:18:49.637 "base_bdevs_list": [ 00:18:49.637 { 00:18:49.637 "name": "BaseBdev1", 00:18:49.637 "uuid": "a61d8e83-3593-5a52-ad9b-96d67f9b68b9", 00:18:49.637 "is_configured": true, 00:18:49.637 "data_offset": 2048, 00:18:49.637 "data_size": 63488 00:18:49.637 }, 00:18:49.637 { 00:18:49.637 "name": "BaseBdev2", 00:18:49.637 "uuid": "fc4b918e-7769-5a52-899a-cdf35e8c3f0d", 00:18:49.637 "is_configured": true, 00:18:49.637 "data_offset": 2048, 00:18:49.637 "data_size": 63488 00:18:49.637 }, 00:18:49.637 { 00:18:49.637 "name": "BaseBdev3", 00:18:49.637 "uuid": "f0b04e21-1dcd-5b24-9b92-7c95251d4c78", 00:18:49.637 "is_configured": true, 00:18:49.637 "data_offset": 2048, 00:18:49.637 "data_size": 63488 
00:18:49.637 }, 00:18:49.637 { 00:18:49.637 "name": "BaseBdev4", 00:18:49.637 "uuid": "c6ceda6f-367b-53a9-8f4d-ae9ba1cd439c", 00:18:49.637 "is_configured": true, 00:18:49.637 "data_offset": 2048, 00:18:49.637 "data_size": 63488 00:18:49.637 } 00:18:49.637 ] 00:18:49.637 }' 00:18:49.637 22:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:49.637 22:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.206 22:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:50.206 22:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:50.206 [2024-07-13 22:02:09.518833] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:18:51.144 22:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:51.404 22:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:51.404 22:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:18:51.404 22:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:18:51.404 22:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:51.404 22:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:51.404 22:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:51.404 22:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:51.404 22:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- 
# local strip_size=64 00:18:51.404 22:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:51.404 22:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:51.404 22:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:51.404 22:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:51.404 22:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:51.404 22:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.404 22:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.663 22:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:51.663 "name": "raid_bdev1", 00:18:51.663 "uuid": "fdbfc201-db37-4a33-8e18-d30f1e2e00ac", 00:18:51.663 "strip_size_kb": 64, 00:18:51.663 "state": "online", 00:18:51.663 "raid_level": "raid0", 00:18:51.663 "superblock": true, 00:18:51.663 "num_base_bdevs": 4, 00:18:51.663 "num_base_bdevs_discovered": 4, 00:18:51.663 "num_base_bdevs_operational": 4, 00:18:51.663 "base_bdevs_list": [ 00:18:51.663 { 00:18:51.663 "name": "BaseBdev1", 00:18:51.663 "uuid": "a61d8e83-3593-5a52-ad9b-96d67f9b68b9", 00:18:51.663 "is_configured": true, 00:18:51.663 "data_offset": 2048, 00:18:51.663 "data_size": 63488 00:18:51.663 }, 00:18:51.663 { 00:18:51.663 "name": "BaseBdev2", 00:18:51.663 "uuid": "fc4b918e-7769-5a52-899a-cdf35e8c3f0d", 00:18:51.663 "is_configured": true, 00:18:51.663 "data_offset": 2048, 00:18:51.663 "data_size": 63488 00:18:51.663 }, 00:18:51.663 { 00:18:51.663 "name": "BaseBdev3", 00:18:51.663 "uuid": "f0b04e21-1dcd-5b24-9b92-7c95251d4c78", 00:18:51.663 "is_configured": true, 00:18:51.663 
"data_offset": 2048, 00:18:51.663 "data_size": 63488 00:18:51.663 }, 00:18:51.663 { 00:18:51.663 "name": "BaseBdev4", 00:18:51.663 "uuid": "c6ceda6f-367b-53a9-8f4d-ae9ba1cd439c", 00:18:51.663 "is_configured": true, 00:18:51.663 "data_offset": 2048, 00:18:51.663 "data_size": 63488 00:18:51.663 } 00:18:51.663 ] 00:18:51.663 }' 00:18:51.663 22:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:51.663 22:02:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.922 22:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:52.180 [2024-07-13 22:02:11.411949] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:52.181 [2024-07-13 22:02:11.411994] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:52.181 [2024-07-13 22:02:11.414321] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:52.181 [2024-07-13 22:02:11.414367] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.181 [2024-07-13 22:02:11.414405] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:52.181 [2024-07-13 22:02:11.414424] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042c80 name raid_bdev1, state offline 00:18:52.181 0 00:18:52.181 22:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1424894 00:18:52.181 22:02:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 1424894 ']' 00:18:52.181 22:02:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 1424894 00:18:52.181 22:02:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:18:52.181 22:02:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:52.181 22:02:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1424894 00:18:52.181 22:02:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:52.181 22:02:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:52.181 22:02:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1424894' 00:18:52.181 killing process with pid 1424894 00:18:52.181 22:02:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 1424894 00:18:52.181 [2024-07-13 22:02:11.477786] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:52.181 22:02:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 1424894 00:18:52.440 [2024-07-13 22:02:11.735965] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:53.825 22:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.a8eRH8uH1q 00:18:53.825 22:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:53.825 22:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:53.825 22:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.53 00:18:53.825 22:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:18:53.825 22:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:53.825 22:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:53.825 22:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.53 != \0\.\0\0 ]] 00:18:53.825 00:18:53.825 real 0m7.334s 00:18:53.825 user 0m10.344s 00:18:53.825 sys 0m1.199s 00:18:53.825 22:02:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:18:53.825 22:02:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.825 ************************************ 00:18:53.825 END TEST raid_read_error_test 00:18:53.825 ************************************ 00:18:53.825 22:02:13 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:53.825 22:02:13 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:18:53.825 22:02:13 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:53.825 22:02:13 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:53.825 22:02:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:53.825 ************************************ 00:18:53.825 START TEST raid_write_error_test 00:18:53.825 ************************************ 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 write 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:18:53.825 22:02:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:53.825 22:02:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.iVMiCu1LxP 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1426125 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1426125 /var/tmp/spdk-raid.sock 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 1426125 ']' 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:53.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.825 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:53.825 [2024-07-13 22:02:13.187697] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:18:53.825 [2024-07-13 22:02:13.187819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1426125 ] 00:18:54.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:54.084 EAL: Requested device 0000:3d:01.0 cannot be used [message pair repeated for each remaining QAT VF: 0000:3d:01.1-0000:3d:02.7 and 0000:3f:01.0-0000:3f:02.7] 00:18:54.084 [2024-07-13 22:02:13.351236] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.342 [2024-07-13 22:02:13.555392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.601 [2024-07-13 22:02:13.794733] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:54.601 [2024-07-13 22:02:13.794762] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:54.601 22:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:54.601 22:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:18:54.601 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:54.601 22:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:54.859 BaseBdev1_malloc 00:18:54.859 22:02:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:55.118 true 00:18:55.118 22:02:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:55.118 [2024-07-13 22:02:14.441105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:55.118 [2024-07-13 22:02:14.441159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.118 [2024-07-13 22:02:14.441181] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:18:55.118 [2024-07-13 22:02:14.441197] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.118 [2024-07-13 22:02:14.443295] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.118 [2024-07-13 22:02:14.443328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:55.118 BaseBdev1 00:18:55.118 22:02:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:55.118 22:02:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:55.377 BaseBdev2_malloc 00:18:55.377 22:02:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:55.635 true 00:18:55.635 22:02:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:55.635 [2024-07-13 22:02:14.976171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:18:55.635 [2024-07-13 22:02:14.976223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.635 [2024-07-13 22:02:14.976260] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:18:55.635 [2024-07-13 22:02:14.976276] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.635 [2024-07-13 22:02:14.978390] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.635 [2024-07-13 22:02:14.978423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:55.635 BaseBdev2 00:18:55.635 22:02:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:55.635 22:02:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:55.893 BaseBdev3_malloc 00:18:55.893 22:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:56.152 true 00:18:56.152 22:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:56.152 [2024-07-13 22:02:15.511480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:56.152 [2024-07-13 22:02:15.511530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.152 [2024-07-13 22:02:15.511569] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:18:56.152 [2024-07-13 22:02:15.511583] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:56.152 
[2024-07-13 22:02:15.513704] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.152 [2024-07-13 22:02:15.513734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:56.152 BaseBdev3 00:18:56.152 22:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:56.152 22:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:56.411 BaseBdev4_malloc 00:18:56.411 22:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:18:56.669 true 00:18:56.669 22:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:56.929 [2024-07-13 22:02:16.060513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:56.929 [2024-07-13 22:02:16.060571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.929 [2024-07-13 22:02:16.060595] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:18:56.929 [2024-07-13 22:02:16.060609] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:56.929 [2024-07-13 22:02:16.062772] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.929 [2024-07-13 22:02:16.062802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:56.929 BaseBdev4 00:18:56.929 22:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:18:56.929 [2024-07-13 22:02:16.212945] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:56.929 [2024-07-13 22:02:16.214731] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:56.929 [2024-07-13 22:02:16.214804] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:56.929 [2024-07-13 22:02:16.214862] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:56.929 [2024-07-13 22:02:16.215073] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042c80 00:18:56.929 [2024-07-13 22:02:16.215089] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:56.929 [2024-07-13 22:02:16.215340] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:18:56.929 [2024-07-13 22:02:16.215529] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042c80 00:18:56.929 [2024-07-13 22:02:16.215542] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042c80 00:18:56.929 [2024-07-13 22:02:16.215691] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.929 22:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:56.929 22:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:56.929 22:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:56.929 22:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:56.929 22:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:56.929 22:02:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:56.929 22:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:56.929 22:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:56.929 22:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:56.929 22:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:56.929 22:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.929 22:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.188 22:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:57.188 "name": "raid_bdev1", 00:18:57.188 "uuid": "cf4619ce-2020-4a9d-8fdb-ab26df6e47d4", 00:18:57.188 "strip_size_kb": 64, 00:18:57.188 "state": "online", 00:18:57.188 "raid_level": "raid0", 00:18:57.188 "superblock": true, 00:18:57.188 "num_base_bdevs": 4, 00:18:57.188 "num_base_bdevs_discovered": 4, 00:18:57.188 "num_base_bdevs_operational": 4, 00:18:57.188 "base_bdevs_list": [ 00:18:57.188 { 00:18:57.188 "name": "BaseBdev1", 00:18:57.188 "uuid": "3b2de0e0-01e5-5edf-9a00-f796471c24bb", 00:18:57.188 "is_configured": true, 00:18:57.188 "data_offset": 2048, 00:18:57.188 "data_size": 63488 00:18:57.188 }, 00:18:57.188 { 00:18:57.188 "name": "BaseBdev2", 00:18:57.188 "uuid": "0a40618b-f7c0-5d6a-aa00-601d4f031e90", 00:18:57.188 "is_configured": true, 00:18:57.188 "data_offset": 2048, 00:18:57.188 "data_size": 63488 00:18:57.188 }, 00:18:57.188 { 00:18:57.188 "name": "BaseBdev3", 00:18:57.188 "uuid": "c36c817a-bd90-5afd-a7af-ed512cd46c8d", 00:18:57.188 "is_configured": true, 00:18:57.188 "data_offset": 2048, 00:18:57.188 "data_size": 
63488 00:18:57.188 }, 00:18:57.188 { 00:18:57.188 "name": "BaseBdev4", 00:18:57.188 "uuid": "083bfe23-fe97-5ead-8651-576d027b81c9", 00:18:57.188 "is_configured": true, 00:18:57.188 "data_offset": 2048, 00:18:57.188 "data_size": 63488 00:18:57.188 } 00:18:57.188 ] 00:18:57.188 }' 00:18:57.188 22:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:57.188 22:02:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.758 22:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:57.758 22:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:57.758 [2024-07-13 22:02:16.944068] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:18:58.759 22:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:58.759 22:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:58.759 22:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:18:58.760 22:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:18:58.760 22:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:58.760 22:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:58.760 22:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:58.760 22:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:58.760 22:02:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:58.760 22:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:58.760 22:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:58.760 22:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:58.760 22:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:58.760 22:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:58.760 22:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.760 22:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.019 22:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:59.019 "name": "raid_bdev1", 00:18:59.019 "uuid": "cf4619ce-2020-4a9d-8fdb-ab26df6e47d4", 00:18:59.019 "strip_size_kb": 64, 00:18:59.019 "state": "online", 00:18:59.019 "raid_level": "raid0", 00:18:59.019 "superblock": true, 00:18:59.019 "num_base_bdevs": 4, 00:18:59.019 "num_base_bdevs_discovered": 4, 00:18:59.019 "num_base_bdevs_operational": 4, 00:18:59.019 "base_bdevs_list": [ 00:18:59.019 { 00:18:59.019 "name": "BaseBdev1", 00:18:59.019 "uuid": "3b2de0e0-01e5-5edf-9a00-f796471c24bb", 00:18:59.019 "is_configured": true, 00:18:59.019 "data_offset": 2048, 00:18:59.019 "data_size": 63488 00:18:59.019 }, 00:18:59.019 { 00:18:59.019 "name": "BaseBdev2", 00:18:59.019 "uuid": "0a40618b-f7c0-5d6a-aa00-601d4f031e90", 00:18:59.019 "is_configured": true, 00:18:59.019 "data_offset": 2048, 00:18:59.019 "data_size": 63488 00:18:59.019 }, 00:18:59.019 { 00:18:59.019 "name": "BaseBdev3", 00:18:59.019 "uuid": "c36c817a-bd90-5afd-a7af-ed512cd46c8d", 00:18:59.019 "is_configured": 
true, 00:18:59.019 "data_offset": 2048, 00:18:59.019 "data_size": 63488 00:18:59.019 }, 00:18:59.019 { 00:18:59.019 "name": "BaseBdev4", 00:18:59.019 "uuid": "083bfe23-fe97-5ead-8651-576d027b81c9", 00:18:59.019 "is_configured": true, 00:18:59.019 "data_offset": 2048, 00:18:59.019 "data_size": 63488 00:18:59.019 } 00:18:59.019 ] 00:18:59.019 }' 00:18:59.019 22:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:59.019 22:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.587 22:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:59.587 [2024-07-13 22:02:18.897583] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:59.587 [2024-07-13 22:02:18.897628] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:59.587 [2024-07-13 22:02:18.899872] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:59.587 [2024-07-13 22:02:18.899922] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:59.587 [2024-07-13 22:02:18.899979] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:59.587 [2024-07-13 22:02:18.899997] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042c80 name raid_bdev1, state offline 00:18:59.587 0 00:18:59.587 22:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1426125 00:18:59.587 22:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 1426125 ']' 00:18:59.587 22:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 1426125 00:18:59.587 22:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:18:59.587 22:02:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:59.587 22:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1426125 00:18:59.587 22:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:59.587 22:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:59.587 22:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1426125' 00:18:59.587 killing process with pid 1426125 00:18:59.587 22:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 1426125 00:18:59.587 [2024-07-13 22:02:18.971660] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:59.587 22:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 1426125 00:18:59.846 [2024-07-13 22:02:19.223531] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:01.224 22:02:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.iVMiCu1LxP 00:19:01.224 22:02:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:19:01.224 22:02:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:19:01.224 22:02:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.51 00:19:01.224 22:02:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:19:01.224 22:02:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:01.224 22:02:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:01.224 22:02:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.51 != \0\.\0\0 ]] 00:19:01.224 00:19:01.224 real 0m7.423s 00:19:01.224 user 0m10.508s 00:19:01.224 sys 0m1.201s 00:19:01.224 22:02:20 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:01.224 22:02:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.224 ************************************ 00:19:01.224 END TEST raid_write_error_test 00:19:01.224 ************************************ 00:19:01.224 22:02:20 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:01.224 22:02:20 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:19:01.224 22:02:20 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:19:01.224 22:02:20 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:01.224 22:02:20 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:01.224 22:02:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:01.224 ************************************ 00:19:01.224 START TEST raid_state_function_test 00:19:01.224 ************************************ 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 false 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # 
strip_size=64 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1427539 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1427539' 00:19:01.224 Process raid pid: 1427539 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1427539 /var/tmp/spdk-raid.sock 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 1427539 ']' 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:01.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:01.224 22:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.484 [2024-07-13 22:02:20.682240] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:19:01.484 [2024-07-13 22:02:20.682349] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:01.484 EAL: Requested device 0000:3d:01.0 cannot be used [message pair repeated for each remaining QAT VF: 0000:3d:01.1-0000:3d:02.7 and 0000:3f:01.0-0000:3f:02.7] 00:19:01.484 [2024-07-13 22:02:20.843927] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.743 [2024-07-13 22:02:21.050541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.002 [2024-07-13 22:02:21.308945] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:02.002 [2024-07-13 22:02:21.308972] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:02.261 22:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:02.261 22:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:19:02.261 22:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:02.261 [2024-07-13 22:02:21.600008] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:02.261 [2024-07-13 22:02:21.600054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1
doesn't exist now 00:19:02.261 [2024-07-13 22:02:21.600064] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:02.261 [2024-07-13 22:02:21.600076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:02.261 [2024-07-13 22:02:21.600084] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:02.261 [2024-07-13 22:02:21.600098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:02.261 [2024-07-13 22:02:21.600106] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:02.261 [2024-07-13 22:02:21.600117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:02.261 22:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:02.261 22:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:02.261 22:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:02.261 22:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:02.261 22:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:02.261 22:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:02.261 22:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:02.261 22:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:02.261 22:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:02.261 22:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:02.261 22:02:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.261 22:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.519 22:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:02.519 "name": "Existed_Raid", 00:19:02.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.519 "strip_size_kb": 64, 00:19:02.519 "state": "configuring", 00:19:02.519 "raid_level": "concat", 00:19:02.519 "superblock": false, 00:19:02.519 "num_base_bdevs": 4, 00:19:02.519 "num_base_bdevs_discovered": 0, 00:19:02.519 "num_base_bdevs_operational": 4, 00:19:02.519 "base_bdevs_list": [ 00:19:02.519 { 00:19:02.519 "name": "BaseBdev1", 00:19:02.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.519 "is_configured": false, 00:19:02.519 "data_offset": 0, 00:19:02.519 "data_size": 0 00:19:02.519 }, 00:19:02.519 { 00:19:02.519 "name": "BaseBdev2", 00:19:02.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.519 "is_configured": false, 00:19:02.519 "data_offset": 0, 00:19:02.519 "data_size": 0 00:19:02.519 }, 00:19:02.519 { 00:19:02.519 "name": "BaseBdev3", 00:19:02.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.519 "is_configured": false, 00:19:02.519 "data_offset": 0, 00:19:02.519 "data_size": 0 00:19:02.519 }, 00:19:02.519 { 00:19:02.520 "name": "BaseBdev4", 00:19:02.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.520 "is_configured": false, 00:19:02.520 "data_offset": 0, 00:19:02.520 "data_size": 0 00:19:02.520 } 00:19:02.520 ] 00:19:02.520 }' 00:19:02.520 22:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:02.520 22:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.086 22:02:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:03.086 [2024-07-13 22:02:22.430058] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:03.086 [2024-07-13 22:02:22.430092] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:19:03.086 22:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:03.345 [2024-07-13 22:02:22.598554] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:03.345 [2024-07-13 22:02:22.598594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:03.345 [2024-07-13 22:02:22.598604] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:03.345 [2024-07-13 22:02:22.598621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:03.345 [2024-07-13 22:02:22.598632] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:03.345 [2024-07-13 22:02:22.598643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:03.345 [2024-07-13 22:02:22.598652] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:03.345 [2024-07-13 22:02:22.598663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:03.345 22:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:03.603 [2024-07-13 22:02:22.811540] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:03.603 BaseBdev1 00:19:03.603 22:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:03.603 22:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:03.603 22:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:03.603 22:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:03.603 22:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:03.603 22:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:03.603 22:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:03.863 22:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:03.863 [ 00:19:03.863 { 00:19:03.863 "name": "BaseBdev1", 00:19:03.863 "aliases": [ 00:19:03.863 "875f44f9-9ea8-4337-8a91-e0feedf14c13" 00:19:03.863 ], 00:19:03.863 "product_name": "Malloc disk", 00:19:03.863 "block_size": 512, 00:19:03.863 "num_blocks": 65536, 00:19:03.863 "uuid": "875f44f9-9ea8-4337-8a91-e0feedf14c13", 00:19:03.863 "assigned_rate_limits": { 00:19:03.863 "rw_ios_per_sec": 0, 00:19:03.863 "rw_mbytes_per_sec": 0, 00:19:03.863 "r_mbytes_per_sec": 0, 00:19:03.863 "w_mbytes_per_sec": 0 00:19:03.863 }, 00:19:03.863 "claimed": true, 00:19:03.863 "claim_type": "exclusive_write", 00:19:03.863 "zoned": false, 00:19:03.863 "supported_io_types": { 00:19:03.863 "read": true, 00:19:03.863 "write": true, 00:19:03.863 "unmap": true, 00:19:03.863 "flush": true, 00:19:03.863 
"reset": true, 00:19:03.863 "nvme_admin": false, 00:19:03.863 "nvme_io": false, 00:19:03.863 "nvme_io_md": false, 00:19:03.863 "write_zeroes": true, 00:19:03.863 "zcopy": true, 00:19:03.863 "get_zone_info": false, 00:19:03.863 "zone_management": false, 00:19:03.863 "zone_append": false, 00:19:03.863 "compare": false, 00:19:03.863 "compare_and_write": false, 00:19:03.863 "abort": true, 00:19:03.863 "seek_hole": false, 00:19:03.863 "seek_data": false, 00:19:03.863 "copy": true, 00:19:03.863 "nvme_iov_md": false 00:19:03.863 }, 00:19:03.863 "memory_domains": [ 00:19:03.863 { 00:19:03.863 "dma_device_id": "system", 00:19:03.863 "dma_device_type": 1 00:19:03.863 }, 00:19:03.863 { 00:19:03.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.863 "dma_device_type": 2 00:19:03.863 } 00:19:03.863 ], 00:19:03.863 "driver_specific": {} 00:19:03.863 } 00:19:03.863 ] 00:19:03.863 22:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:03.863 22:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:03.863 22:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:03.863 22:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:03.863 22:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:03.863 22:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:03.863 22:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:03.863 22:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:03.863 22:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:03.863 22:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 
-- # local num_base_bdevs_discovered 00:19:03.863 22:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:03.863 22:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.863 22:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:04.121 22:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:04.121 "name": "Existed_Raid", 00:19:04.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.121 "strip_size_kb": 64, 00:19:04.121 "state": "configuring", 00:19:04.121 "raid_level": "concat", 00:19:04.121 "superblock": false, 00:19:04.121 "num_base_bdevs": 4, 00:19:04.121 "num_base_bdevs_discovered": 1, 00:19:04.121 "num_base_bdevs_operational": 4, 00:19:04.121 "base_bdevs_list": [ 00:19:04.121 { 00:19:04.121 "name": "BaseBdev1", 00:19:04.121 "uuid": "875f44f9-9ea8-4337-8a91-e0feedf14c13", 00:19:04.121 "is_configured": true, 00:19:04.121 "data_offset": 0, 00:19:04.121 "data_size": 65536 00:19:04.121 }, 00:19:04.121 { 00:19:04.121 "name": "BaseBdev2", 00:19:04.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.121 "is_configured": false, 00:19:04.121 "data_offset": 0, 00:19:04.121 "data_size": 0 00:19:04.121 }, 00:19:04.121 { 00:19:04.121 "name": "BaseBdev3", 00:19:04.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.121 "is_configured": false, 00:19:04.121 "data_offset": 0, 00:19:04.121 "data_size": 0 00:19:04.121 }, 00:19:04.121 { 00:19:04.121 "name": "BaseBdev4", 00:19:04.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.121 "is_configured": false, 00:19:04.121 "data_offset": 0, 00:19:04.121 "data_size": 0 00:19:04.121 } 00:19:04.121 ] 00:19:04.121 }' 00:19:04.121 22:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:19:04.121 22:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.688 22:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:04.688 [2024-07-13 22:02:23.982664] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:04.688 [2024-07-13 22:02:23.982712] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:19:04.688 22:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:04.947 [2024-07-13 22:02:24.155180] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:04.947 [2024-07-13 22:02:24.156863] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:04.947 [2024-07-13 22:02:24.156899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:04.947 [2024-07-13 22:02:24.156932] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:04.947 [2024-07-13 22:02:24.156944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:04.947 [2024-07-13 22:02:24.156953] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:04.947 [2024-07-13 22:02:24.156967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:04.947 22:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:04.947 22:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:04.947 22:02:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:04.947 22:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:04.947 22:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:04.947 22:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:04.947 22:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:04.947 22:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:04.947 22:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:04.947 22:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:04.947 22:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:04.947 22:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:04.947 22:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:04.947 22:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.205 22:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:05.205 "name": "Existed_Raid", 00:19:05.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.206 "strip_size_kb": 64, 00:19:05.206 "state": "configuring", 00:19:05.206 "raid_level": "concat", 00:19:05.206 "superblock": false, 00:19:05.206 "num_base_bdevs": 4, 00:19:05.206 "num_base_bdevs_discovered": 1, 00:19:05.206 "num_base_bdevs_operational": 4, 00:19:05.206 "base_bdevs_list": [ 00:19:05.206 { 
00:19:05.206 "name": "BaseBdev1", 00:19:05.206 "uuid": "875f44f9-9ea8-4337-8a91-e0feedf14c13", 00:19:05.206 "is_configured": true, 00:19:05.206 "data_offset": 0, 00:19:05.206 "data_size": 65536 00:19:05.206 }, 00:19:05.206 { 00:19:05.206 "name": "BaseBdev2", 00:19:05.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.206 "is_configured": false, 00:19:05.206 "data_offset": 0, 00:19:05.206 "data_size": 0 00:19:05.206 }, 00:19:05.206 { 00:19:05.206 "name": "BaseBdev3", 00:19:05.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.206 "is_configured": false, 00:19:05.206 "data_offset": 0, 00:19:05.206 "data_size": 0 00:19:05.206 }, 00:19:05.206 { 00:19:05.206 "name": "BaseBdev4", 00:19:05.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.206 "is_configured": false, 00:19:05.206 "data_offset": 0, 00:19:05.206 "data_size": 0 00:19:05.206 } 00:19:05.206 ] 00:19:05.206 }' 00:19:05.206 22:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:05.206 22:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.464 22:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:05.723 [2024-07-13 22:02:24.996569] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:05.723 BaseBdev2 00:19:05.723 22:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:05.723 22:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:05.723 22:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:05.723 22:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:05.723 22:02:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:05.723 22:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:05.723 22:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:05.982 22:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:05.982 [ 00:19:05.982 { 00:19:05.982 "name": "BaseBdev2", 00:19:05.982 "aliases": [ 00:19:05.982 "52d4e81f-99d4-44e7-a6e8-c65a6ddbb7a5" 00:19:05.982 ], 00:19:05.982 "product_name": "Malloc disk", 00:19:05.982 "block_size": 512, 00:19:05.982 "num_blocks": 65536, 00:19:05.982 "uuid": "52d4e81f-99d4-44e7-a6e8-c65a6ddbb7a5", 00:19:05.982 "assigned_rate_limits": { 00:19:05.982 "rw_ios_per_sec": 0, 00:19:05.982 "rw_mbytes_per_sec": 0, 00:19:05.982 "r_mbytes_per_sec": 0, 00:19:05.982 "w_mbytes_per_sec": 0 00:19:05.982 }, 00:19:05.982 "claimed": true, 00:19:05.982 "claim_type": "exclusive_write", 00:19:05.982 "zoned": false, 00:19:05.982 "supported_io_types": { 00:19:05.982 "read": true, 00:19:05.982 "write": true, 00:19:05.982 "unmap": true, 00:19:05.982 "flush": true, 00:19:05.982 "reset": true, 00:19:05.982 "nvme_admin": false, 00:19:05.982 "nvme_io": false, 00:19:05.982 "nvme_io_md": false, 00:19:05.982 "write_zeroes": true, 00:19:05.982 "zcopy": true, 00:19:05.982 "get_zone_info": false, 00:19:05.982 "zone_management": false, 00:19:05.982 "zone_append": false, 00:19:05.982 "compare": false, 00:19:05.982 "compare_and_write": false, 00:19:05.982 "abort": true, 00:19:05.982 "seek_hole": false, 00:19:05.982 "seek_data": false, 00:19:05.982 "copy": true, 00:19:05.982 "nvme_iov_md": false 00:19:05.982 }, 00:19:05.982 "memory_domains": [ 00:19:05.982 { 00:19:05.982 "dma_device_id": "system", 
00:19:05.982 "dma_device_type": 1 00:19:05.982 }, 00:19:05.982 { 00:19:05.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.982 "dma_device_type": 2 00:19:05.982 } 00:19:05.982 ], 00:19:05.982 "driver_specific": {} 00:19:05.982 } 00:19:05.982 ] 00:19:05.982 22:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:05.982 22:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:05.982 22:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:05.982 22:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:05.982 22:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:05.982 22:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:05.982 22:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:05.982 22:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:05.982 22:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:05.982 22:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:05.982 22:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:05.982 22:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:05.982 22:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:05.982 22:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.982 22:02:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.241 22:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:06.241 "name": "Existed_Raid", 00:19:06.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.241 "strip_size_kb": 64, 00:19:06.241 "state": "configuring", 00:19:06.241 "raid_level": "concat", 00:19:06.241 "superblock": false, 00:19:06.241 "num_base_bdevs": 4, 00:19:06.241 "num_base_bdevs_discovered": 2, 00:19:06.241 "num_base_bdevs_operational": 4, 00:19:06.241 "base_bdevs_list": [ 00:19:06.241 { 00:19:06.241 "name": "BaseBdev1", 00:19:06.241 "uuid": "875f44f9-9ea8-4337-8a91-e0feedf14c13", 00:19:06.241 "is_configured": true, 00:19:06.241 "data_offset": 0, 00:19:06.241 "data_size": 65536 00:19:06.241 }, 00:19:06.241 { 00:19:06.241 "name": "BaseBdev2", 00:19:06.241 "uuid": "52d4e81f-99d4-44e7-a6e8-c65a6ddbb7a5", 00:19:06.241 "is_configured": true, 00:19:06.241 "data_offset": 0, 00:19:06.241 "data_size": 65536 00:19:06.241 }, 00:19:06.241 { 00:19:06.241 "name": "BaseBdev3", 00:19:06.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.241 "is_configured": false, 00:19:06.241 "data_offset": 0, 00:19:06.241 "data_size": 0 00:19:06.241 }, 00:19:06.241 { 00:19:06.241 "name": "BaseBdev4", 00:19:06.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.241 "is_configured": false, 00:19:06.241 "data_offset": 0, 00:19:06.241 "data_size": 0 00:19:06.241 } 00:19:06.241 ] 00:19:06.241 }' 00:19:06.241 22:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:06.241 22:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.810 22:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:07.070 [2024-07-13 22:02:26.207096] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:07.070 BaseBdev3 00:19:07.070 22:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:07.070 22:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:07.070 22:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:07.070 22:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:07.070 22:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:07.070 22:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:07.070 22:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:07.070 22:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:07.329 [ 00:19:07.329 { 00:19:07.329 "name": "BaseBdev3", 00:19:07.329 "aliases": [ 00:19:07.329 "83a10b40-347b-47f2-bb48-0b4f3d0822cf" 00:19:07.329 ], 00:19:07.329 "product_name": "Malloc disk", 00:19:07.329 "block_size": 512, 00:19:07.329 "num_blocks": 65536, 00:19:07.329 "uuid": "83a10b40-347b-47f2-bb48-0b4f3d0822cf", 00:19:07.329 "assigned_rate_limits": { 00:19:07.329 "rw_ios_per_sec": 0, 00:19:07.329 "rw_mbytes_per_sec": 0, 00:19:07.329 "r_mbytes_per_sec": 0, 00:19:07.329 "w_mbytes_per_sec": 0 00:19:07.329 }, 00:19:07.329 "claimed": true, 00:19:07.329 "claim_type": "exclusive_write", 00:19:07.329 "zoned": false, 00:19:07.329 "supported_io_types": { 00:19:07.329 "read": true, 00:19:07.329 "write": true, 00:19:07.329 "unmap": true, 00:19:07.329 "flush": true, 00:19:07.329 
"reset": true, 00:19:07.329 "nvme_admin": false, 00:19:07.329 "nvme_io": false, 00:19:07.329 "nvme_io_md": false, 00:19:07.329 "write_zeroes": true, 00:19:07.329 "zcopy": true, 00:19:07.329 "get_zone_info": false, 00:19:07.329 "zone_management": false, 00:19:07.329 "zone_append": false, 00:19:07.329 "compare": false, 00:19:07.329 "compare_and_write": false, 00:19:07.329 "abort": true, 00:19:07.329 "seek_hole": false, 00:19:07.329 "seek_data": false, 00:19:07.329 "copy": true, 00:19:07.329 "nvme_iov_md": false 00:19:07.329 }, 00:19:07.329 "memory_domains": [ 00:19:07.329 { 00:19:07.329 "dma_device_id": "system", 00:19:07.329 "dma_device_type": 1 00:19:07.329 }, 00:19:07.329 { 00:19:07.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.329 "dma_device_type": 2 00:19:07.329 } 00:19:07.329 ], 00:19:07.329 "driver_specific": {} 00:19:07.329 } 00:19:07.329 ] 00:19:07.329 22:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:07.329 22:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:07.329 22:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:07.329 22:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:07.329 22:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:07.329 22:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:07.329 22:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:07.329 22:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:07.329 22:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:07.329 22:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:19:07.329 22:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:07.329 22:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:07.329 22:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:07.329 22:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.329 22:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:07.589 22:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:07.589 "name": "Existed_Raid", 00:19:07.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.589 "strip_size_kb": 64, 00:19:07.589 "state": "configuring", 00:19:07.589 "raid_level": "concat", 00:19:07.589 "superblock": false, 00:19:07.589 "num_base_bdevs": 4, 00:19:07.589 "num_base_bdevs_discovered": 3, 00:19:07.589 "num_base_bdevs_operational": 4, 00:19:07.589 "base_bdevs_list": [ 00:19:07.589 { 00:19:07.589 "name": "BaseBdev1", 00:19:07.589 "uuid": "875f44f9-9ea8-4337-8a91-e0feedf14c13", 00:19:07.589 "is_configured": true, 00:19:07.589 "data_offset": 0, 00:19:07.589 "data_size": 65536 00:19:07.589 }, 00:19:07.589 { 00:19:07.589 "name": "BaseBdev2", 00:19:07.589 "uuid": "52d4e81f-99d4-44e7-a6e8-c65a6ddbb7a5", 00:19:07.589 "is_configured": true, 00:19:07.589 "data_offset": 0, 00:19:07.589 "data_size": 65536 00:19:07.589 }, 00:19:07.589 { 00:19:07.589 "name": "BaseBdev3", 00:19:07.589 "uuid": "83a10b40-347b-47f2-bb48-0b4f3d0822cf", 00:19:07.589 "is_configured": true, 00:19:07.589 "data_offset": 0, 00:19:07.589 "data_size": 65536 00:19:07.589 }, 00:19:07.589 { 00:19:07.589 "name": "BaseBdev4", 00:19:07.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.589 "is_configured": 
false, 00:19:07.589 "data_offset": 0, 00:19:07.589 "data_size": 0 00:19:07.589 } 00:19:07.589 ] 00:19:07.589 }' 00:19:07.589 22:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:07.589 22:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.848 22:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:08.107 [2024-07-13 22:02:27.395833] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:08.107 [2024-07-13 22:02:27.395874] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:19:08.107 [2024-07-13 22:02:27.395883] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:19:08.107 [2024-07-13 22:02:27.396143] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:19:08.107 [2024-07-13 22:02:27.396336] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:19:08.107 [2024-07-13 22:02:27.396349] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:19:08.107 [2024-07-13 22:02:27.396602] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.107 BaseBdev4 00:19:08.107 22:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:19:08.107 22:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:19:08.107 22:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:08.107 22:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:08.107 22:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- 
# [[ -z '' ]] 00:19:08.107 22:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:08.107 22:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:08.367 22:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:08.367 [ 00:19:08.367 { 00:19:08.367 "name": "BaseBdev4", 00:19:08.367 "aliases": [ 00:19:08.367 "f7ce3ad4-bcb3-43a7-8722-cabaddda82d0" 00:19:08.367 ], 00:19:08.367 "product_name": "Malloc disk", 00:19:08.367 "block_size": 512, 00:19:08.367 "num_blocks": 65536, 00:19:08.367 "uuid": "f7ce3ad4-bcb3-43a7-8722-cabaddda82d0", 00:19:08.367 "assigned_rate_limits": { 00:19:08.367 "rw_ios_per_sec": 0, 00:19:08.367 "rw_mbytes_per_sec": 0, 00:19:08.367 "r_mbytes_per_sec": 0, 00:19:08.367 "w_mbytes_per_sec": 0 00:19:08.367 }, 00:19:08.367 "claimed": true, 00:19:08.367 "claim_type": "exclusive_write", 00:19:08.367 "zoned": false, 00:19:08.367 "supported_io_types": { 00:19:08.367 "read": true, 00:19:08.367 "write": true, 00:19:08.367 "unmap": true, 00:19:08.367 "flush": true, 00:19:08.367 "reset": true, 00:19:08.367 "nvme_admin": false, 00:19:08.367 "nvme_io": false, 00:19:08.367 "nvme_io_md": false, 00:19:08.367 "write_zeroes": true, 00:19:08.367 "zcopy": true, 00:19:08.367 "get_zone_info": false, 00:19:08.367 "zone_management": false, 00:19:08.367 "zone_append": false, 00:19:08.367 "compare": false, 00:19:08.367 "compare_and_write": false, 00:19:08.367 "abort": true, 00:19:08.367 "seek_hole": false, 00:19:08.367 "seek_data": false, 00:19:08.367 "copy": true, 00:19:08.367 "nvme_iov_md": false 00:19:08.367 }, 00:19:08.367 "memory_domains": [ 00:19:08.367 { 00:19:08.367 "dma_device_id": "system", 00:19:08.367 "dma_device_type": 1 
00:19:08.367 }, 00:19:08.367 { 00:19:08.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.367 "dma_device_type": 2 00:19:08.367 } 00:19:08.367 ], 00:19:08.367 "driver_specific": {} 00:19:08.367 } 00:19:08.367 ] 00:19:08.367 22:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:08.367 22:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:08.367 22:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:08.367 22:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:08.367 22:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:08.367 22:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:08.367 22:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:08.367 22:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:08.367 22:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:08.367 22:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:08.367 22:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:08.367 22:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:08.367 22:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:08.367 22:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.367 22:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:19:08.627 22:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:08.627 "name": "Existed_Raid", 00:19:08.627 "uuid": "caa3ceaa-aec3-45ca-9c31-7936647173f3", 00:19:08.627 "strip_size_kb": 64, 00:19:08.627 "state": "online", 00:19:08.627 "raid_level": "concat", 00:19:08.627 "superblock": false, 00:19:08.627 "num_base_bdevs": 4, 00:19:08.627 "num_base_bdevs_discovered": 4, 00:19:08.627 "num_base_bdevs_operational": 4, 00:19:08.627 "base_bdevs_list": [ 00:19:08.627 { 00:19:08.627 "name": "BaseBdev1", 00:19:08.627 "uuid": "875f44f9-9ea8-4337-8a91-e0feedf14c13", 00:19:08.627 "is_configured": true, 00:19:08.627 "data_offset": 0, 00:19:08.627 "data_size": 65536 00:19:08.627 }, 00:19:08.627 { 00:19:08.627 "name": "BaseBdev2", 00:19:08.627 "uuid": "52d4e81f-99d4-44e7-a6e8-c65a6ddbb7a5", 00:19:08.627 "is_configured": true, 00:19:08.627 "data_offset": 0, 00:19:08.627 "data_size": 65536 00:19:08.627 }, 00:19:08.627 { 00:19:08.627 "name": "BaseBdev3", 00:19:08.627 "uuid": "83a10b40-347b-47f2-bb48-0b4f3d0822cf", 00:19:08.627 "is_configured": true, 00:19:08.627 "data_offset": 0, 00:19:08.627 "data_size": 65536 00:19:08.627 }, 00:19:08.627 { 00:19:08.627 "name": "BaseBdev4", 00:19:08.627 "uuid": "f7ce3ad4-bcb3-43a7-8722-cabaddda82d0", 00:19:08.627 "is_configured": true, 00:19:08.627 "data_offset": 0, 00:19:08.627 "data_size": 65536 00:19:08.627 } 00:19:08.627 ] 00:19:08.627 }' 00:19:08.627 22:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:08.627 22:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.195 22:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:09.195 22:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:09.195 22:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_info 00:19:09.195 22:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:09.195 22:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:09.195 22:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:09.195 22:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:09.195 22:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:09.195 [2024-07-13 22:02:28.567304] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:09.455 22:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:09.455 "name": "Existed_Raid", 00:19:09.455 "aliases": [ 00:19:09.455 "caa3ceaa-aec3-45ca-9c31-7936647173f3" 00:19:09.455 ], 00:19:09.455 "product_name": "Raid Volume", 00:19:09.455 "block_size": 512, 00:19:09.455 "num_blocks": 262144, 00:19:09.455 "uuid": "caa3ceaa-aec3-45ca-9c31-7936647173f3", 00:19:09.455 "assigned_rate_limits": { 00:19:09.455 "rw_ios_per_sec": 0, 00:19:09.455 "rw_mbytes_per_sec": 0, 00:19:09.455 "r_mbytes_per_sec": 0, 00:19:09.455 "w_mbytes_per_sec": 0 00:19:09.455 }, 00:19:09.455 "claimed": false, 00:19:09.455 "zoned": false, 00:19:09.455 "supported_io_types": { 00:19:09.455 "read": true, 00:19:09.455 "write": true, 00:19:09.455 "unmap": true, 00:19:09.455 "flush": true, 00:19:09.455 "reset": true, 00:19:09.455 "nvme_admin": false, 00:19:09.455 "nvme_io": false, 00:19:09.455 "nvme_io_md": false, 00:19:09.455 "write_zeroes": true, 00:19:09.455 "zcopy": false, 00:19:09.455 "get_zone_info": false, 00:19:09.455 "zone_management": false, 00:19:09.455 "zone_append": false, 00:19:09.455 "compare": false, 00:19:09.455 "compare_and_write": false, 00:19:09.455 "abort": false, 00:19:09.455 "seek_hole": 
false, 00:19:09.455 "seek_data": false, 00:19:09.455 "copy": false, 00:19:09.455 "nvme_iov_md": false 00:19:09.455 }, 00:19:09.455 "memory_domains": [ 00:19:09.455 { 00:19:09.455 "dma_device_id": "system", 00:19:09.455 "dma_device_type": 1 00:19:09.455 }, 00:19:09.455 { 00:19:09.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.455 "dma_device_type": 2 00:19:09.455 }, 00:19:09.455 { 00:19:09.455 "dma_device_id": "system", 00:19:09.455 "dma_device_type": 1 00:19:09.455 }, 00:19:09.455 { 00:19:09.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.455 "dma_device_type": 2 00:19:09.455 }, 00:19:09.455 { 00:19:09.455 "dma_device_id": "system", 00:19:09.455 "dma_device_type": 1 00:19:09.455 }, 00:19:09.455 { 00:19:09.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.455 "dma_device_type": 2 00:19:09.455 }, 00:19:09.455 { 00:19:09.455 "dma_device_id": "system", 00:19:09.455 "dma_device_type": 1 00:19:09.455 }, 00:19:09.455 { 00:19:09.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.455 "dma_device_type": 2 00:19:09.455 } 00:19:09.455 ], 00:19:09.455 "driver_specific": { 00:19:09.455 "raid": { 00:19:09.455 "uuid": "caa3ceaa-aec3-45ca-9c31-7936647173f3", 00:19:09.455 "strip_size_kb": 64, 00:19:09.455 "state": "online", 00:19:09.455 "raid_level": "concat", 00:19:09.455 "superblock": false, 00:19:09.455 "num_base_bdevs": 4, 00:19:09.455 "num_base_bdevs_discovered": 4, 00:19:09.455 "num_base_bdevs_operational": 4, 00:19:09.455 "base_bdevs_list": [ 00:19:09.455 { 00:19:09.455 "name": "BaseBdev1", 00:19:09.455 "uuid": "875f44f9-9ea8-4337-8a91-e0feedf14c13", 00:19:09.455 "is_configured": true, 00:19:09.455 "data_offset": 0, 00:19:09.455 "data_size": 65536 00:19:09.455 }, 00:19:09.455 { 00:19:09.455 "name": "BaseBdev2", 00:19:09.455 "uuid": "52d4e81f-99d4-44e7-a6e8-c65a6ddbb7a5", 00:19:09.455 "is_configured": true, 00:19:09.455 "data_offset": 0, 00:19:09.455 "data_size": 65536 00:19:09.455 }, 00:19:09.455 { 00:19:09.455 "name": "BaseBdev3", 00:19:09.455 "uuid": 
"83a10b40-347b-47f2-bb48-0b4f3d0822cf", 00:19:09.455 "is_configured": true, 00:19:09.455 "data_offset": 0, 00:19:09.455 "data_size": 65536 00:19:09.455 }, 00:19:09.455 { 00:19:09.455 "name": "BaseBdev4", 00:19:09.455 "uuid": "f7ce3ad4-bcb3-43a7-8722-cabaddda82d0", 00:19:09.455 "is_configured": true, 00:19:09.455 "data_offset": 0, 00:19:09.455 "data_size": 65536 00:19:09.455 } 00:19:09.455 ] 00:19:09.455 } 00:19:09.455 } 00:19:09.455 }' 00:19:09.455 22:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:09.455 22:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:09.455 BaseBdev2 00:19:09.455 BaseBdev3 00:19:09.455 BaseBdev4' 00:19:09.455 22:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:09.455 22:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:09.455 22:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:09.455 22:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:09.455 "name": "BaseBdev1", 00:19:09.456 "aliases": [ 00:19:09.456 "875f44f9-9ea8-4337-8a91-e0feedf14c13" 00:19:09.456 ], 00:19:09.456 "product_name": "Malloc disk", 00:19:09.456 "block_size": 512, 00:19:09.456 "num_blocks": 65536, 00:19:09.456 "uuid": "875f44f9-9ea8-4337-8a91-e0feedf14c13", 00:19:09.456 "assigned_rate_limits": { 00:19:09.456 "rw_ios_per_sec": 0, 00:19:09.456 "rw_mbytes_per_sec": 0, 00:19:09.456 "r_mbytes_per_sec": 0, 00:19:09.456 "w_mbytes_per_sec": 0 00:19:09.456 }, 00:19:09.456 "claimed": true, 00:19:09.456 "claim_type": "exclusive_write", 00:19:09.456 "zoned": false, 00:19:09.456 "supported_io_types": { 00:19:09.456 "read": true, 00:19:09.456 
"write": true, 00:19:09.456 "unmap": true, 00:19:09.456 "flush": true, 00:19:09.456 "reset": true, 00:19:09.456 "nvme_admin": false, 00:19:09.456 "nvme_io": false, 00:19:09.456 "nvme_io_md": false, 00:19:09.456 "write_zeroes": true, 00:19:09.456 "zcopy": true, 00:19:09.456 "get_zone_info": false, 00:19:09.456 "zone_management": false, 00:19:09.456 "zone_append": false, 00:19:09.456 "compare": false, 00:19:09.456 "compare_and_write": false, 00:19:09.456 "abort": true, 00:19:09.456 "seek_hole": false, 00:19:09.456 "seek_data": false, 00:19:09.456 "copy": true, 00:19:09.456 "nvme_iov_md": false 00:19:09.456 }, 00:19:09.456 "memory_domains": [ 00:19:09.456 { 00:19:09.456 "dma_device_id": "system", 00:19:09.456 "dma_device_type": 1 00:19:09.456 }, 00:19:09.456 { 00:19:09.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.456 "dma_device_type": 2 00:19:09.456 } 00:19:09.456 ], 00:19:09.456 "driver_specific": {} 00:19:09.456 }' 00:19:09.456 22:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:09.715 22:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:09.715 22:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:09.715 22:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:09.715 22:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:09.715 22:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:09.715 22:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:09.715 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:09.715 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:09.715 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:09.715 22:02:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:09.973 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:09.973 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:09.973 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:09.973 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:09.973 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:09.973 "name": "BaseBdev2", 00:19:09.973 "aliases": [ 00:19:09.973 "52d4e81f-99d4-44e7-a6e8-c65a6ddbb7a5" 00:19:09.973 ], 00:19:09.973 "product_name": "Malloc disk", 00:19:09.973 "block_size": 512, 00:19:09.973 "num_blocks": 65536, 00:19:09.973 "uuid": "52d4e81f-99d4-44e7-a6e8-c65a6ddbb7a5", 00:19:09.973 "assigned_rate_limits": { 00:19:09.973 "rw_ios_per_sec": 0, 00:19:09.973 "rw_mbytes_per_sec": 0, 00:19:09.973 "r_mbytes_per_sec": 0, 00:19:09.973 "w_mbytes_per_sec": 0 00:19:09.973 }, 00:19:09.973 "claimed": true, 00:19:09.973 "claim_type": "exclusive_write", 00:19:09.973 "zoned": false, 00:19:09.973 "supported_io_types": { 00:19:09.973 "read": true, 00:19:09.973 "write": true, 00:19:09.973 "unmap": true, 00:19:09.973 "flush": true, 00:19:09.973 "reset": true, 00:19:09.973 "nvme_admin": false, 00:19:09.973 "nvme_io": false, 00:19:09.973 "nvme_io_md": false, 00:19:09.973 "write_zeroes": true, 00:19:09.973 "zcopy": true, 00:19:09.973 "get_zone_info": false, 00:19:09.973 "zone_management": false, 00:19:09.973 "zone_append": false, 00:19:09.973 "compare": false, 00:19:09.973 "compare_and_write": false, 00:19:09.973 "abort": true, 00:19:09.973 "seek_hole": false, 00:19:09.973 "seek_data": false, 00:19:09.973 "copy": true, 00:19:09.973 "nvme_iov_md": false 00:19:09.973 }, 
00:19:09.974 "memory_domains": [ 00:19:09.974 { 00:19:09.974 "dma_device_id": "system", 00:19:09.974 "dma_device_type": 1 00:19:09.974 }, 00:19:09.974 { 00:19:09.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.974 "dma_device_type": 2 00:19:09.974 } 00:19:09.974 ], 00:19:09.974 "driver_specific": {} 00:19:09.974 }' 00:19:09.974 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:09.974 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:10.232 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:10.232 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:10.232 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:10.232 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:10.232 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:10.232 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:10.232 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:10.232 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:10.232 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:10.492 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:10.492 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:10.492 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:10.492 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:10.492 22:02:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:10.492 "name": "BaseBdev3", 00:19:10.492 "aliases": [ 00:19:10.492 "83a10b40-347b-47f2-bb48-0b4f3d0822cf" 00:19:10.492 ], 00:19:10.492 "product_name": "Malloc disk", 00:19:10.492 "block_size": 512, 00:19:10.492 "num_blocks": 65536, 00:19:10.492 "uuid": "83a10b40-347b-47f2-bb48-0b4f3d0822cf", 00:19:10.492 "assigned_rate_limits": { 00:19:10.492 "rw_ios_per_sec": 0, 00:19:10.492 "rw_mbytes_per_sec": 0, 00:19:10.492 "r_mbytes_per_sec": 0, 00:19:10.492 "w_mbytes_per_sec": 0 00:19:10.492 }, 00:19:10.492 "claimed": true, 00:19:10.492 "claim_type": "exclusive_write", 00:19:10.492 "zoned": false, 00:19:10.492 "supported_io_types": { 00:19:10.492 "read": true, 00:19:10.492 "write": true, 00:19:10.492 "unmap": true, 00:19:10.492 "flush": true, 00:19:10.492 "reset": true, 00:19:10.492 "nvme_admin": false, 00:19:10.492 "nvme_io": false, 00:19:10.492 "nvme_io_md": false, 00:19:10.492 "write_zeroes": true, 00:19:10.492 "zcopy": true, 00:19:10.492 "get_zone_info": false, 00:19:10.492 "zone_management": false, 00:19:10.492 "zone_append": false, 00:19:10.492 "compare": false, 00:19:10.492 "compare_and_write": false, 00:19:10.492 "abort": true, 00:19:10.492 "seek_hole": false, 00:19:10.492 "seek_data": false, 00:19:10.492 "copy": true, 00:19:10.492 "nvme_iov_md": false 00:19:10.492 }, 00:19:10.492 "memory_domains": [ 00:19:10.492 { 00:19:10.492 "dma_device_id": "system", 00:19:10.492 "dma_device_type": 1 00:19:10.492 }, 00:19:10.492 { 00:19:10.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.492 "dma_device_type": 2 00:19:10.492 } 00:19:10.492 ], 00:19:10.492 "driver_specific": {} 00:19:10.492 }' 00:19:10.492 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:10.492 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:10.492 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 
== 512 ]] 00:19:10.492 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:10.751 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:10.751 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:10.751 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:10.751 22:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:10.751 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:10.752 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:10.752 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:10.752 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:10.752 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:10.752 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:10.752 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:11.011 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:11.011 "name": "BaseBdev4", 00:19:11.011 "aliases": [ 00:19:11.011 "f7ce3ad4-bcb3-43a7-8722-cabaddda82d0" 00:19:11.011 ], 00:19:11.011 "product_name": "Malloc disk", 00:19:11.011 "block_size": 512, 00:19:11.011 "num_blocks": 65536, 00:19:11.011 "uuid": "f7ce3ad4-bcb3-43a7-8722-cabaddda82d0", 00:19:11.011 "assigned_rate_limits": { 00:19:11.011 "rw_ios_per_sec": 0, 00:19:11.011 "rw_mbytes_per_sec": 0, 00:19:11.011 "r_mbytes_per_sec": 0, 00:19:11.011 "w_mbytes_per_sec": 0 00:19:11.011 }, 00:19:11.011 "claimed": true, 00:19:11.011 
"claim_type": "exclusive_write", 00:19:11.011 "zoned": false, 00:19:11.011 "supported_io_types": { 00:19:11.011 "read": true, 00:19:11.011 "write": true, 00:19:11.011 "unmap": true, 00:19:11.011 "flush": true, 00:19:11.011 "reset": true, 00:19:11.011 "nvme_admin": false, 00:19:11.011 "nvme_io": false, 00:19:11.011 "nvme_io_md": false, 00:19:11.011 "write_zeroes": true, 00:19:11.011 "zcopy": true, 00:19:11.011 "get_zone_info": false, 00:19:11.011 "zone_management": false, 00:19:11.011 "zone_append": false, 00:19:11.011 "compare": false, 00:19:11.011 "compare_and_write": false, 00:19:11.011 "abort": true, 00:19:11.011 "seek_hole": false, 00:19:11.011 "seek_data": false, 00:19:11.011 "copy": true, 00:19:11.011 "nvme_iov_md": false 00:19:11.011 }, 00:19:11.011 "memory_domains": [ 00:19:11.011 { 00:19:11.011 "dma_device_id": "system", 00:19:11.011 "dma_device_type": 1 00:19:11.011 }, 00:19:11.011 { 00:19:11.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.011 "dma_device_type": 2 00:19:11.011 } 00:19:11.011 ], 00:19:11.011 "driver_specific": {} 00:19:11.011 }' 00:19:11.011 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:11.011 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:11.011 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:11.011 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:11.011 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:11.285 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:11.285 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:11.285 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:11.285 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == 
null ]] 00:19:11.285 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:11.285 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:11.286 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:11.286 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:11.589 [2024-07-13 22:02:30.700677] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:11.589 [2024-07-13 22:02:30.700705] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:11.589 [2024-07-13 22:02:30.700753] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:11.589 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:11.589 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:19:11.589 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:11.589 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:11.589 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:19:11.589 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:19:11.589 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:11.589 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:19:11.589 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:11.589 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:19:11.589 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:11.589 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:11.589 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:11.589 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:11.589 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:11.589 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:11.589 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.589 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:11.589 "name": "Existed_Raid", 00:19:11.589 "uuid": "caa3ceaa-aec3-45ca-9c31-7936647173f3", 00:19:11.589 "strip_size_kb": 64, 00:19:11.589 "state": "offline", 00:19:11.589 "raid_level": "concat", 00:19:11.589 "superblock": false, 00:19:11.589 "num_base_bdevs": 4, 00:19:11.589 "num_base_bdevs_discovered": 3, 00:19:11.589 "num_base_bdevs_operational": 3, 00:19:11.589 "base_bdevs_list": [ 00:19:11.589 { 00:19:11.589 "name": null, 00:19:11.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.589 "is_configured": false, 00:19:11.589 "data_offset": 0, 00:19:11.589 "data_size": 65536 00:19:11.589 }, 00:19:11.589 { 00:19:11.589 "name": "BaseBdev2", 00:19:11.590 "uuid": "52d4e81f-99d4-44e7-a6e8-c65a6ddbb7a5", 00:19:11.590 "is_configured": true, 00:19:11.590 "data_offset": 0, 00:19:11.590 "data_size": 65536 00:19:11.590 }, 00:19:11.590 { 00:19:11.590 "name": "BaseBdev3", 00:19:11.590 "uuid": "83a10b40-347b-47f2-bb48-0b4f3d0822cf", 00:19:11.590 "is_configured": true, 00:19:11.590 
"data_offset": 0, 00:19:11.590 "data_size": 65536 00:19:11.590 }, 00:19:11.590 { 00:19:11.590 "name": "BaseBdev4", 00:19:11.590 "uuid": "f7ce3ad4-bcb3-43a7-8722-cabaddda82d0", 00:19:11.590 "is_configured": true, 00:19:11.590 "data_offset": 0, 00:19:11.590 "data_size": 65536 00:19:11.590 } 00:19:11.590 ] 00:19:11.590 }' 00:19:11.590 22:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:11.590 22:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.158 22:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:12.158 22:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:12.158 22:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:12.158 22:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.158 22:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:12.158 22:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:12.158 22:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:12.417 [2024-07-13 22:02:31.684165] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:12.417 22:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:12.417 22:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:12.417 22:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:19:12.417 22:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:12.676 22:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:12.676 22:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:12.676 22:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:12.935 [2024-07-13 22:02:32.107172] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:12.935 22:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:12.935 22:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:12.935 22:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.935 22:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:13.194 22:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:13.194 22:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:13.194 22:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:13.194 [2024-07-13 22:02:32.544845] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:13.194 [2024-07-13 22:02:32.544892] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:19:13.452 22:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:13.452 
22:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:13.452 22:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.452 22:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:13.452 22:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:13.452 22:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:13.452 22:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:19:13.452 22:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:13.452 22:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:13.452 22:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:13.710 BaseBdev2 00:19:13.710 22:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:13.710 22:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:13.710 22:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:13.710 22:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:13.710 22:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:13.710 22:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:13.710 22:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:13.974 22:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:13.974 [ 00:19:13.974 { 00:19:13.974 "name": "BaseBdev2", 00:19:13.974 "aliases": [ 00:19:13.974 "7b960403-1c4e-4d91-a0a6-f01e9626d674" 00:19:13.974 ], 00:19:13.974 "product_name": "Malloc disk", 00:19:13.974 "block_size": 512, 00:19:13.974 "num_blocks": 65536, 00:19:13.974 "uuid": "7b960403-1c4e-4d91-a0a6-f01e9626d674", 00:19:13.974 "assigned_rate_limits": { 00:19:13.974 "rw_ios_per_sec": 0, 00:19:13.974 "rw_mbytes_per_sec": 0, 00:19:13.974 "r_mbytes_per_sec": 0, 00:19:13.974 "w_mbytes_per_sec": 0 00:19:13.974 }, 00:19:13.974 "claimed": false, 00:19:13.974 "zoned": false, 00:19:13.974 "supported_io_types": { 00:19:13.974 "read": true, 00:19:13.974 "write": true, 00:19:13.974 "unmap": true, 00:19:13.974 "flush": true, 00:19:13.974 "reset": true, 00:19:13.974 "nvme_admin": false, 00:19:13.974 "nvme_io": false, 00:19:13.974 "nvme_io_md": false, 00:19:13.974 "write_zeroes": true, 00:19:13.974 "zcopy": true, 00:19:13.974 "get_zone_info": false, 00:19:13.974 "zone_management": false, 00:19:13.974 "zone_append": false, 00:19:13.974 "compare": false, 00:19:13.974 "compare_and_write": false, 00:19:13.974 "abort": true, 00:19:13.974 "seek_hole": false, 00:19:13.974 "seek_data": false, 00:19:13.974 "copy": true, 00:19:13.974 "nvme_iov_md": false 00:19:13.974 }, 00:19:13.974 "memory_domains": [ 00:19:13.974 { 00:19:13.974 "dma_device_id": "system", 00:19:13.974 "dma_device_type": 1 00:19:13.974 }, 00:19:13.974 { 00:19:13.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.974 "dma_device_type": 2 00:19:13.974 } 00:19:13.974 ], 00:19:13.974 "driver_specific": {} 00:19:13.974 } 00:19:13.975 ] 00:19:13.975 22:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 
00:19:13.975 22:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:13.975 22:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:13.975 22:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:14.232 BaseBdev3 00:19:14.232 22:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:14.232 22:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:14.232 22:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:14.232 22:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:14.232 22:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:14.232 22:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:14.232 22:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:14.491 22:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:14.749 [ 00:19:14.749 { 00:19:14.749 "name": "BaseBdev3", 00:19:14.749 "aliases": [ 00:19:14.749 "a49b1284-e60e-4202-bfa0-802d7f7b2675" 00:19:14.749 ], 00:19:14.749 "product_name": "Malloc disk", 00:19:14.749 "block_size": 512, 00:19:14.749 "num_blocks": 65536, 00:19:14.749 "uuid": "a49b1284-e60e-4202-bfa0-802d7f7b2675", 00:19:14.749 "assigned_rate_limits": { 00:19:14.749 "rw_ios_per_sec": 0, 00:19:14.749 "rw_mbytes_per_sec": 0, 00:19:14.749 
"r_mbytes_per_sec": 0, 00:19:14.749 "w_mbytes_per_sec": 0 00:19:14.749 }, 00:19:14.749 "claimed": false, 00:19:14.749 "zoned": false, 00:19:14.749 "supported_io_types": { 00:19:14.749 "read": true, 00:19:14.749 "write": true, 00:19:14.749 "unmap": true, 00:19:14.749 "flush": true, 00:19:14.749 "reset": true, 00:19:14.749 "nvme_admin": false, 00:19:14.749 "nvme_io": false, 00:19:14.749 "nvme_io_md": false, 00:19:14.749 "write_zeroes": true, 00:19:14.749 "zcopy": true, 00:19:14.749 "get_zone_info": false, 00:19:14.749 "zone_management": false, 00:19:14.749 "zone_append": false, 00:19:14.749 "compare": false, 00:19:14.749 "compare_and_write": false, 00:19:14.749 "abort": true, 00:19:14.749 "seek_hole": false, 00:19:14.749 "seek_data": false, 00:19:14.749 "copy": true, 00:19:14.749 "nvme_iov_md": false 00:19:14.749 }, 00:19:14.749 "memory_domains": [ 00:19:14.749 { 00:19:14.749 "dma_device_id": "system", 00:19:14.749 "dma_device_type": 1 00:19:14.749 }, 00:19:14.750 { 00:19:14.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.750 "dma_device_type": 2 00:19:14.750 } 00:19:14.750 ], 00:19:14.750 "driver_specific": {} 00:19:14.750 } 00:19:14.750 ] 00:19:14.750 22:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:14.750 22:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:14.750 22:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:14.750 22:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:14.750 BaseBdev4 00:19:14.750 22:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:19:14.750 22:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:19:14.750 22:02:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:14.750 22:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:14.750 22:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:14.750 22:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:14.750 22:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:15.008 22:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:15.266 [ 00:19:15.266 { 00:19:15.266 "name": "BaseBdev4", 00:19:15.266 "aliases": [ 00:19:15.266 "a9546dee-63cb-4e45-a1f9-ed01ffa053a8" 00:19:15.266 ], 00:19:15.266 "product_name": "Malloc disk", 00:19:15.266 "block_size": 512, 00:19:15.266 "num_blocks": 65536, 00:19:15.266 "uuid": "a9546dee-63cb-4e45-a1f9-ed01ffa053a8", 00:19:15.266 "assigned_rate_limits": { 00:19:15.266 "rw_ios_per_sec": 0, 00:19:15.266 "rw_mbytes_per_sec": 0, 00:19:15.266 "r_mbytes_per_sec": 0, 00:19:15.266 "w_mbytes_per_sec": 0 00:19:15.266 }, 00:19:15.266 "claimed": false, 00:19:15.266 "zoned": false, 00:19:15.266 "supported_io_types": { 00:19:15.266 "read": true, 00:19:15.266 "write": true, 00:19:15.266 "unmap": true, 00:19:15.266 "flush": true, 00:19:15.266 "reset": true, 00:19:15.266 "nvme_admin": false, 00:19:15.266 "nvme_io": false, 00:19:15.266 "nvme_io_md": false, 00:19:15.266 "write_zeroes": true, 00:19:15.266 "zcopy": true, 00:19:15.266 "get_zone_info": false, 00:19:15.266 "zone_management": false, 00:19:15.266 "zone_append": false, 00:19:15.266 "compare": false, 00:19:15.266 "compare_and_write": false, 00:19:15.266 "abort": true, 00:19:15.266 
"seek_hole": false, 00:19:15.266 "seek_data": false, 00:19:15.266 "copy": true, 00:19:15.266 "nvme_iov_md": false 00:19:15.266 }, 00:19:15.266 "memory_domains": [ 00:19:15.266 { 00:19:15.266 "dma_device_id": "system", 00:19:15.266 "dma_device_type": 1 00:19:15.266 }, 00:19:15.266 { 00:19:15.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.266 "dma_device_type": 2 00:19:15.266 } 00:19:15.266 ], 00:19:15.266 "driver_specific": {} 00:19:15.266 } 00:19:15.266 ] 00:19:15.266 22:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:15.266 22:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:15.266 22:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:15.266 22:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:15.266 [2024-07-13 22:02:34.603952] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:15.266 [2024-07-13 22:02:34.603992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:15.266 [2024-07-13 22:02:34.604019] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:15.266 [2024-07-13 22:02:34.605751] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:15.266 [2024-07-13 22:02:34.605799] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:15.266 22:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:15.266 22:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:15.266 22:02:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:15.266 22:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:15.266 22:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:15.266 22:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:15.266 22:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:15.266 22:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:15.266 22:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:15.266 22:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:15.266 22:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.266 22:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.525 22:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:15.525 "name": "Existed_Raid", 00:19:15.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.525 "strip_size_kb": 64, 00:19:15.525 "state": "configuring", 00:19:15.525 "raid_level": "concat", 00:19:15.525 "superblock": false, 00:19:15.525 "num_base_bdevs": 4, 00:19:15.525 "num_base_bdevs_discovered": 3, 00:19:15.525 "num_base_bdevs_operational": 4, 00:19:15.525 "base_bdevs_list": [ 00:19:15.525 { 00:19:15.525 "name": "BaseBdev1", 00:19:15.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.525 "is_configured": false, 00:19:15.525 "data_offset": 0, 00:19:15.525 "data_size": 0 00:19:15.525 }, 00:19:15.525 { 00:19:15.525 "name": "BaseBdev2", 00:19:15.525 "uuid": 
"7b960403-1c4e-4d91-a0a6-f01e9626d674", 00:19:15.525 "is_configured": true, 00:19:15.525 "data_offset": 0, 00:19:15.525 "data_size": 65536 00:19:15.525 }, 00:19:15.525 { 00:19:15.525 "name": "BaseBdev3", 00:19:15.525 "uuid": "a49b1284-e60e-4202-bfa0-802d7f7b2675", 00:19:15.525 "is_configured": true, 00:19:15.525 "data_offset": 0, 00:19:15.525 "data_size": 65536 00:19:15.525 }, 00:19:15.525 { 00:19:15.525 "name": "BaseBdev4", 00:19:15.525 "uuid": "a9546dee-63cb-4e45-a1f9-ed01ffa053a8", 00:19:15.525 "is_configured": true, 00:19:15.525 "data_offset": 0, 00:19:15.525 "data_size": 65536 00:19:15.525 } 00:19:15.525 ] 00:19:15.525 }' 00:19:15.525 22:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:15.525 22:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.091 22:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:16.091 [2024-07-13 22:02:35.434077] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:16.091 22:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:16.091 22:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:16.091 22:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:16.091 22:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:16.091 22:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:16.091 22:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:16.091 22:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:19:16.091 22:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:16.091 22:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:16.091 22:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:16.091 22:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.091 22:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.350 22:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:16.350 "name": "Existed_Raid", 00:19:16.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.350 "strip_size_kb": 64, 00:19:16.350 "state": "configuring", 00:19:16.350 "raid_level": "concat", 00:19:16.350 "superblock": false, 00:19:16.350 "num_base_bdevs": 4, 00:19:16.350 "num_base_bdevs_discovered": 2, 00:19:16.350 "num_base_bdevs_operational": 4, 00:19:16.350 "base_bdevs_list": [ 00:19:16.350 { 00:19:16.350 "name": "BaseBdev1", 00:19:16.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.350 "is_configured": false, 00:19:16.350 "data_offset": 0, 00:19:16.350 "data_size": 0 00:19:16.350 }, 00:19:16.350 { 00:19:16.350 "name": null, 00:19:16.350 "uuid": "7b960403-1c4e-4d91-a0a6-f01e9626d674", 00:19:16.350 "is_configured": false, 00:19:16.350 "data_offset": 0, 00:19:16.350 "data_size": 65536 00:19:16.350 }, 00:19:16.350 { 00:19:16.350 "name": "BaseBdev3", 00:19:16.350 "uuid": "a49b1284-e60e-4202-bfa0-802d7f7b2675", 00:19:16.350 "is_configured": true, 00:19:16.350 "data_offset": 0, 00:19:16.350 "data_size": 65536 00:19:16.350 }, 00:19:16.350 { 00:19:16.350 "name": "BaseBdev4", 00:19:16.350 "uuid": "a9546dee-63cb-4e45-a1f9-ed01ffa053a8", 00:19:16.350 "is_configured": true, 00:19:16.350 
"data_offset": 0, 00:19:16.350 "data_size": 65536 00:19:16.350 } 00:19:16.350 ] 00:19:16.350 }' 00:19:16.350 22:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:16.350 22:02:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.918 22:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.918 22:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:17.177 22:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:17.177 22:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:17.177 [2024-07-13 22:02:36.505132] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:17.177 BaseBdev1 00:19:17.177 22:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:17.177 22:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:17.177 22:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:17.177 22:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:17.177 22:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:17.177 22:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:17.177 22:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:17.436 
22:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:17.696 [ 00:19:17.696 { 00:19:17.696 "name": "BaseBdev1", 00:19:17.696 "aliases": [ 00:19:17.696 "a48ad667-5728-44a1-a9d3-9c5ec15fca47" 00:19:17.696 ], 00:19:17.696 "product_name": "Malloc disk", 00:19:17.696 "block_size": 512, 00:19:17.696 "num_blocks": 65536, 00:19:17.696 "uuid": "a48ad667-5728-44a1-a9d3-9c5ec15fca47", 00:19:17.696 "assigned_rate_limits": { 00:19:17.696 "rw_ios_per_sec": 0, 00:19:17.696 "rw_mbytes_per_sec": 0, 00:19:17.696 "r_mbytes_per_sec": 0, 00:19:17.696 "w_mbytes_per_sec": 0 00:19:17.696 }, 00:19:17.696 "claimed": true, 00:19:17.696 "claim_type": "exclusive_write", 00:19:17.696 "zoned": false, 00:19:17.696 "supported_io_types": { 00:19:17.696 "read": true, 00:19:17.696 "write": true, 00:19:17.696 "unmap": true, 00:19:17.696 "flush": true, 00:19:17.696 "reset": true, 00:19:17.696 "nvme_admin": false, 00:19:17.696 "nvme_io": false, 00:19:17.696 "nvme_io_md": false, 00:19:17.696 "write_zeroes": true, 00:19:17.696 "zcopy": true, 00:19:17.696 "get_zone_info": false, 00:19:17.696 "zone_management": false, 00:19:17.696 "zone_append": false, 00:19:17.696 "compare": false, 00:19:17.696 "compare_and_write": false, 00:19:17.696 "abort": true, 00:19:17.696 "seek_hole": false, 00:19:17.696 "seek_data": false, 00:19:17.696 "copy": true, 00:19:17.696 "nvme_iov_md": false 00:19:17.696 }, 00:19:17.696 "memory_domains": [ 00:19:17.696 { 00:19:17.696 "dma_device_id": "system", 00:19:17.696 "dma_device_type": 1 00:19:17.696 }, 00:19:17.696 { 00:19:17.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.696 "dma_device_type": 2 00:19:17.696 } 00:19:17.696 ], 00:19:17.696 "driver_specific": {} 00:19:17.696 } 00:19:17.696 ] 00:19:17.696 22:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:17.696 22:02:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:17.696 22:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:17.696 22:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:17.696 22:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:17.696 22:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:17.696 22:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:17.696 22:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:17.696 22:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:17.696 22:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:17.696 22:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:17.696 22:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.696 22:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.696 22:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:17.696 "name": "Existed_Raid", 00:19:17.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.696 "strip_size_kb": 64, 00:19:17.696 "state": "configuring", 00:19:17.696 "raid_level": "concat", 00:19:17.696 "superblock": false, 00:19:17.696 "num_base_bdevs": 4, 00:19:17.696 "num_base_bdevs_discovered": 3, 00:19:17.696 "num_base_bdevs_operational": 4, 00:19:17.696 "base_bdevs_list": [ 00:19:17.696 { 
00:19:17.696 "name": "BaseBdev1", 00:19:17.696 "uuid": "a48ad667-5728-44a1-a9d3-9c5ec15fca47", 00:19:17.696 "is_configured": true, 00:19:17.696 "data_offset": 0, 00:19:17.696 "data_size": 65536 00:19:17.696 }, 00:19:17.696 { 00:19:17.696 "name": null, 00:19:17.696 "uuid": "7b960403-1c4e-4d91-a0a6-f01e9626d674", 00:19:17.696 "is_configured": false, 00:19:17.696 "data_offset": 0, 00:19:17.696 "data_size": 65536 00:19:17.696 }, 00:19:17.696 { 00:19:17.696 "name": "BaseBdev3", 00:19:17.696 "uuid": "a49b1284-e60e-4202-bfa0-802d7f7b2675", 00:19:17.696 "is_configured": true, 00:19:17.696 "data_offset": 0, 00:19:17.696 "data_size": 65536 00:19:17.696 }, 00:19:17.696 { 00:19:17.696 "name": "BaseBdev4", 00:19:17.696 "uuid": "a9546dee-63cb-4e45-a1f9-ed01ffa053a8", 00:19:17.696 "is_configured": true, 00:19:17.696 "data_offset": 0, 00:19:17.696 "data_size": 65536 00:19:17.696 } 00:19:17.696 ] 00:19:17.696 }' 00:19:17.696 22:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:17.696 22:02:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.265 22:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.265 22:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:18.525 22:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:18.525 22:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:18.525 [2024-07-13 22:02:37.856746] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:18.525 22:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 4 00:19:18.525 22:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:18.525 22:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:18.525 22:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:18.525 22:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:18.525 22:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:18.525 22:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:18.525 22:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:18.525 22:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:18.525 22:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:18.525 22:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.525 22:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.783 22:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:18.783 "name": "Existed_Raid", 00:19:18.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.783 "strip_size_kb": 64, 00:19:18.783 "state": "configuring", 00:19:18.783 "raid_level": "concat", 00:19:18.783 "superblock": false, 00:19:18.783 "num_base_bdevs": 4, 00:19:18.783 "num_base_bdevs_discovered": 2, 00:19:18.783 "num_base_bdevs_operational": 4, 00:19:18.783 "base_bdevs_list": [ 00:19:18.783 { 00:19:18.783 "name": "BaseBdev1", 00:19:18.783 "uuid": "a48ad667-5728-44a1-a9d3-9c5ec15fca47", 00:19:18.783 
"is_configured": true, 00:19:18.783 "data_offset": 0, 00:19:18.783 "data_size": 65536 00:19:18.783 }, 00:19:18.783 { 00:19:18.783 "name": null, 00:19:18.783 "uuid": "7b960403-1c4e-4d91-a0a6-f01e9626d674", 00:19:18.783 "is_configured": false, 00:19:18.783 "data_offset": 0, 00:19:18.783 "data_size": 65536 00:19:18.783 }, 00:19:18.783 { 00:19:18.783 "name": null, 00:19:18.783 "uuid": "a49b1284-e60e-4202-bfa0-802d7f7b2675", 00:19:18.783 "is_configured": false, 00:19:18.783 "data_offset": 0, 00:19:18.783 "data_size": 65536 00:19:18.783 }, 00:19:18.783 { 00:19:18.783 "name": "BaseBdev4", 00:19:18.783 "uuid": "a9546dee-63cb-4e45-a1f9-ed01ffa053a8", 00:19:18.783 "is_configured": true, 00:19:18.783 "data_offset": 0, 00:19:18.783 "data_size": 65536 00:19:18.783 } 00:19:18.783 ] 00:19:18.783 }' 00:19:18.783 22:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:18.783 22:02:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.350 22:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.350 22:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:19.350 22:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:19.350 22:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:19.609 [2024-07-13 22:02:38.855402] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:19.609 22:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:19.609 22:02:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:19.609 22:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:19.609 22:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:19.609 22:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:19.609 22:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:19.609 22:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:19.609 22:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:19.609 22:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:19.609 22:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:19.609 22:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.609 22:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.868 22:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:19.868 "name": "Existed_Raid", 00:19:19.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.868 "strip_size_kb": 64, 00:19:19.868 "state": "configuring", 00:19:19.868 "raid_level": "concat", 00:19:19.868 "superblock": false, 00:19:19.868 "num_base_bdevs": 4, 00:19:19.868 "num_base_bdevs_discovered": 3, 00:19:19.868 "num_base_bdevs_operational": 4, 00:19:19.868 "base_bdevs_list": [ 00:19:19.868 { 00:19:19.868 "name": "BaseBdev1", 00:19:19.868 "uuid": "a48ad667-5728-44a1-a9d3-9c5ec15fca47", 00:19:19.868 "is_configured": true, 00:19:19.868 "data_offset": 0, 00:19:19.868 "data_size": 65536 
00:19:19.868 }, 00:19:19.868 { 00:19:19.868 "name": null, 00:19:19.868 "uuid": "7b960403-1c4e-4d91-a0a6-f01e9626d674", 00:19:19.868 "is_configured": false, 00:19:19.868 "data_offset": 0, 00:19:19.868 "data_size": 65536 00:19:19.868 }, 00:19:19.868 { 00:19:19.868 "name": "BaseBdev3", 00:19:19.868 "uuid": "a49b1284-e60e-4202-bfa0-802d7f7b2675", 00:19:19.868 "is_configured": true, 00:19:19.868 "data_offset": 0, 00:19:19.868 "data_size": 65536 00:19:19.868 }, 00:19:19.868 { 00:19:19.868 "name": "BaseBdev4", 00:19:19.868 "uuid": "a9546dee-63cb-4e45-a1f9-ed01ffa053a8", 00:19:19.868 "is_configured": true, 00:19:19.868 "data_offset": 0, 00:19:19.868 "data_size": 65536 00:19:19.868 } 00:19:19.868 ] 00:19:19.868 }' 00:19:19.868 22:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:19.868 22:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.127 22:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.127 22:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:20.386 22:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:20.386 22:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:20.646 [2024-07-13 22:02:39.817969] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:20.646 22:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:20.646 22:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:20.646 22:02:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:20.646 22:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:20.646 22:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:20.646 22:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:20.646 22:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:20.646 22:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:20.646 22:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:20.646 22:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:20.647 22:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.647 22:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:20.906 22:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:20.906 "name": "Existed_Raid", 00:19:20.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.906 "strip_size_kb": 64, 00:19:20.906 "state": "configuring", 00:19:20.906 "raid_level": "concat", 00:19:20.906 "superblock": false, 00:19:20.906 "num_base_bdevs": 4, 00:19:20.906 "num_base_bdevs_discovered": 2, 00:19:20.906 "num_base_bdevs_operational": 4, 00:19:20.906 "base_bdevs_list": [ 00:19:20.906 { 00:19:20.906 "name": null, 00:19:20.906 "uuid": "a48ad667-5728-44a1-a9d3-9c5ec15fca47", 00:19:20.906 "is_configured": false, 00:19:20.906 "data_offset": 0, 00:19:20.906 "data_size": 65536 00:19:20.906 }, 00:19:20.906 { 00:19:20.906 "name": null, 00:19:20.906 "uuid": "7b960403-1c4e-4d91-a0a6-f01e9626d674", 
00:19:20.906 "is_configured": false, 00:19:20.906 "data_offset": 0, 00:19:20.906 "data_size": 65536 00:19:20.906 }, 00:19:20.906 { 00:19:20.906 "name": "BaseBdev3", 00:19:20.906 "uuid": "a49b1284-e60e-4202-bfa0-802d7f7b2675", 00:19:20.906 "is_configured": true, 00:19:20.906 "data_offset": 0, 00:19:20.906 "data_size": 65536 00:19:20.906 }, 00:19:20.906 { 00:19:20.906 "name": "BaseBdev4", 00:19:20.906 "uuid": "a9546dee-63cb-4e45-a1f9-ed01ffa053a8", 00:19:20.906 "is_configured": true, 00:19:20.906 "data_offset": 0, 00:19:20.906 "data_size": 65536 00:19:20.906 } 00:19:20.906 ] 00:19:20.906 }' 00:19:20.906 22:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:20.906 22:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.473 22:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.473 22:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:21.473 22:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:21.473 22:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:21.732 [2024-07-13 22:02:40.942374] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:21.732 22:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:21.732 22:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:21.732 22:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:21.732 
22:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:21.732 22:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:21.732 22:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:21.732 22:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:21.732 22:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:21.732 22:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:21.732 22:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:21.732 22:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.732 22:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.992 22:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:21.992 "name": "Existed_Raid", 00:19:21.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.992 "strip_size_kb": 64, 00:19:21.992 "state": "configuring", 00:19:21.992 "raid_level": "concat", 00:19:21.992 "superblock": false, 00:19:21.992 "num_base_bdevs": 4, 00:19:21.992 "num_base_bdevs_discovered": 3, 00:19:21.992 "num_base_bdevs_operational": 4, 00:19:21.992 "base_bdevs_list": [ 00:19:21.992 { 00:19:21.992 "name": null, 00:19:21.992 "uuid": "a48ad667-5728-44a1-a9d3-9c5ec15fca47", 00:19:21.992 "is_configured": false, 00:19:21.992 "data_offset": 0, 00:19:21.992 "data_size": 65536 00:19:21.992 }, 00:19:21.992 { 00:19:21.992 "name": "BaseBdev2", 00:19:21.992 "uuid": "7b960403-1c4e-4d91-a0a6-f01e9626d674", 00:19:21.992 "is_configured": true, 00:19:21.992 "data_offset": 0, 
00:19:21.992 "data_size": 65536 00:19:21.992 }, 00:19:21.992 { 00:19:21.992 "name": "BaseBdev3", 00:19:21.992 "uuid": "a49b1284-e60e-4202-bfa0-802d7f7b2675", 00:19:21.992 "is_configured": true, 00:19:21.992 "data_offset": 0, 00:19:21.992 "data_size": 65536 00:19:21.992 }, 00:19:21.992 { 00:19:21.992 "name": "BaseBdev4", 00:19:21.992 "uuid": "a9546dee-63cb-4e45-a1f9-ed01ffa053a8", 00:19:21.992 "is_configured": true, 00:19:21.992 "data_offset": 0, 00:19:21.992 "data_size": 65536 00:19:21.992 } 00:19:21.992 ] 00:19:21.992 }' 00:19:21.992 22:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:21.992 22:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.251 22:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.251 22:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:22.511 22:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:22.511 22:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.511 22:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:22.511 22:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u a48ad667-5728-44a1-a9d3-9c5ec15fca47 00:19:22.769 [2024-07-13 22:02:42.085001] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:22.769 [2024-07-13 22:02:42.085042] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 
0x616000042080 00:19:22.769 [2024-07-13 22:02:42.085050] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:19:22.769 [2024-07-13 22:02:42.085306] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:19:22.769 [2024-07-13 22:02:42.085468] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042080 00:19:22.769 [2024-07-13 22:02:42.085480] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000042080 00:19:22.769 [2024-07-13 22:02:42.085712] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.769 NewBaseBdev 00:19:22.769 22:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:22.769 22:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:19:22.769 22:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:22.769 22:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:22.769 22:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:22.769 22:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:22.769 22:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:23.028 22:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:23.028 [ 00:19:23.028 { 00:19:23.028 "name": "NewBaseBdev", 00:19:23.028 "aliases": [ 00:19:23.029 "a48ad667-5728-44a1-a9d3-9c5ec15fca47" 00:19:23.029 ], 00:19:23.029 "product_name": "Malloc disk", 
00:19:23.029 "block_size": 512, 00:19:23.029 "num_blocks": 65536, 00:19:23.029 "uuid": "a48ad667-5728-44a1-a9d3-9c5ec15fca47", 00:19:23.029 "assigned_rate_limits": { 00:19:23.029 "rw_ios_per_sec": 0, 00:19:23.029 "rw_mbytes_per_sec": 0, 00:19:23.029 "r_mbytes_per_sec": 0, 00:19:23.029 "w_mbytes_per_sec": 0 00:19:23.029 }, 00:19:23.029 "claimed": true, 00:19:23.029 "claim_type": "exclusive_write", 00:19:23.029 "zoned": false, 00:19:23.029 "supported_io_types": { 00:19:23.029 "read": true, 00:19:23.029 "write": true, 00:19:23.029 "unmap": true, 00:19:23.029 "flush": true, 00:19:23.029 "reset": true, 00:19:23.029 "nvme_admin": false, 00:19:23.029 "nvme_io": false, 00:19:23.029 "nvme_io_md": false, 00:19:23.029 "write_zeroes": true, 00:19:23.029 "zcopy": true, 00:19:23.029 "get_zone_info": false, 00:19:23.029 "zone_management": false, 00:19:23.029 "zone_append": false, 00:19:23.029 "compare": false, 00:19:23.029 "compare_and_write": false, 00:19:23.029 "abort": true, 00:19:23.029 "seek_hole": false, 00:19:23.029 "seek_data": false, 00:19:23.029 "copy": true, 00:19:23.029 "nvme_iov_md": false 00:19:23.029 }, 00:19:23.029 "memory_domains": [ 00:19:23.029 { 00:19:23.029 "dma_device_id": "system", 00:19:23.029 "dma_device_type": 1 00:19:23.029 }, 00:19:23.029 { 00:19:23.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:23.029 "dma_device_type": 2 00:19:23.029 } 00:19:23.029 ], 00:19:23.029 "driver_specific": {} 00:19:23.029 } 00:19:23.029 ] 00:19:23.029 22:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:23.029 22:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:23.029 22:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:23.029 22:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:23.029 22:02:42 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:23.029 22:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:23.288 22:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:23.288 22:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:23.288 22:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:23.288 22:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:23.288 22:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:23.288 22:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.288 22:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:23.288 22:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:23.288 "name": "Existed_Raid", 00:19:23.288 "uuid": "ca98886f-65c3-407e-ad53-5a76f7d52d74", 00:19:23.288 "strip_size_kb": 64, 00:19:23.288 "state": "online", 00:19:23.288 "raid_level": "concat", 00:19:23.288 "superblock": false, 00:19:23.288 "num_base_bdevs": 4, 00:19:23.288 "num_base_bdevs_discovered": 4, 00:19:23.288 "num_base_bdevs_operational": 4, 00:19:23.288 "base_bdevs_list": [ 00:19:23.288 { 00:19:23.288 "name": "NewBaseBdev", 00:19:23.288 "uuid": "a48ad667-5728-44a1-a9d3-9c5ec15fca47", 00:19:23.288 "is_configured": true, 00:19:23.288 "data_offset": 0, 00:19:23.288 "data_size": 65536 00:19:23.288 }, 00:19:23.288 { 00:19:23.288 "name": "BaseBdev2", 00:19:23.288 "uuid": "7b960403-1c4e-4d91-a0a6-f01e9626d674", 00:19:23.288 "is_configured": true, 00:19:23.288 "data_offset": 0, 00:19:23.288 "data_size": 65536 00:19:23.288 }, 
00:19:23.288 { 00:19:23.288 "name": "BaseBdev3", 00:19:23.288 "uuid": "a49b1284-e60e-4202-bfa0-802d7f7b2675", 00:19:23.288 "is_configured": true, 00:19:23.288 "data_offset": 0, 00:19:23.288 "data_size": 65536 00:19:23.288 }, 00:19:23.288 { 00:19:23.288 "name": "BaseBdev4", 00:19:23.288 "uuid": "a9546dee-63cb-4e45-a1f9-ed01ffa053a8", 00:19:23.288 "is_configured": true, 00:19:23.288 "data_offset": 0, 00:19:23.288 "data_size": 65536 00:19:23.288 } 00:19:23.288 ] 00:19:23.288 }' 00:19:23.288 22:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:23.288 22:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.855 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:23.855 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:23.855 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:23.855 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:23.855 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:23.855 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:23.855 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:23.855 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:23.855 [2024-07-13 22:02:43.224340] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:24.115 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:24.115 "name": "Existed_Raid", 00:19:24.115 "aliases": [ 00:19:24.115 "ca98886f-65c3-407e-ad53-5a76f7d52d74" 
00:19:24.115 ], 00:19:24.115 "product_name": "Raid Volume", 00:19:24.115 "block_size": 512, 00:19:24.115 "num_blocks": 262144, 00:19:24.115 "uuid": "ca98886f-65c3-407e-ad53-5a76f7d52d74", 00:19:24.115 "assigned_rate_limits": { 00:19:24.115 "rw_ios_per_sec": 0, 00:19:24.115 "rw_mbytes_per_sec": 0, 00:19:24.115 "r_mbytes_per_sec": 0, 00:19:24.115 "w_mbytes_per_sec": 0 00:19:24.115 }, 00:19:24.115 "claimed": false, 00:19:24.115 "zoned": false, 00:19:24.115 "supported_io_types": { 00:19:24.115 "read": true, 00:19:24.115 "write": true, 00:19:24.115 "unmap": true, 00:19:24.115 "flush": true, 00:19:24.115 "reset": true, 00:19:24.115 "nvme_admin": false, 00:19:24.115 "nvme_io": false, 00:19:24.115 "nvme_io_md": false, 00:19:24.115 "write_zeroes": true, 00:19:24.115 "zcopy": false, 00:19:24.115 "get_zone_info": false, 00:19:24.115 "zone_management": false, 00:19:24.115 "zone_append": false, 00:19:24.115 "compare": false, 00:19:24.115 "compare_and_write": false, 00:19:24.115 "abort": false, 00:19:24.115 "seek_hole": false, 00:19:24.115 "seek_data": false, 00:19:24.115 "copy": false, 00:19:24.115 "nvme_iov_md": false 00:19:24.115 }, 00:19:24.115 "memory_domains": [ 00:19:24.115 { 00:19:24.115 "dma_device_id": "system", 00:19:24.115 "dma_device_type": 1 00:19:24.115 }, 00:19:24.115 { 00:19:24.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.115 "dma_device_type": 2 00:19:24.115 }, 00:19:24.115 { 00:19:24.115 "dma_device_id": "system", 00:19:24.115 "dma_device_type": 1 00:19:24.115 }, 00:19:24.115 { 00:19:24.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.115 "dma_device_type": 2 00:19:24.115 }, 00:19:24.115 { 00:19:24.115 "dma_device_id": "system", 00:19:24.115 "dma_device_type": 1 00:19:24.115 }, 00:19:24.115 { 00:19:24.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.115 "dma_device_type": 2 00:19:24.115 }, 00:19:24.115 { 00:19:24.115 "dma_device_id": "system", 00:19:24.115 "dma_device_type": 1 00:19:24.115 }, 00:19:24.115 { 00:19:24.115 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:24.115 "dma_device_type": 2 00:19:24.115 } 00:19:24.115 ], 00:19:24.115 "driver_specific": { 00:19:24.115 "raid": { 00:19:24.115 "uuid": "ca98886f-65c3-407e-ad53-5a76f7d52d74", 00:19:24.115 "strip_size_kb": 64, 00:19:24.115 "state": "online", 00:19:24.115 "raid_level": "concat", 00:19:24.115 "superblock": false, 00:19:24.115 "num_base_bdevs": 4, 00:19:24.115 "num_base_bdevs_discovered": 4, 00:19:24.115 "num_base_bdevs_operational": 4, 00:19:24.115 "base_bdevs_list": [ 00:19:24.115 { 00:19:24.115 "name": "NewBaseBdev", 00:19:24.115 "uuid": "a48ad667-5728-44a1-a9d3-9c5ec15fca47", 00:19:24.115 "is_configured": true, 00:19:24.115 "data_offset": 0, 00:19:24.115 "data_size": 65536 00:19:24.115 }, 00:19:24.115 { 00:19:24.115 "name": "BaseBdev2", 00:19:24.115 "uuid": "7b960403-1c4e-4d91-a0a6-f01e9626d674", 00:19:24.115 "is_configured": true, 00:19:24.115 "data_offset": 0, 00:19:24.115 "data_size": 65536 00:19:24.115 }, 00:19:24.115 { 00:19:24.115 "name": "BaseBdev3", 00:19:24.115 "uuid": "a49b1284-e60e-4202-bfa0-802d7f7b2675", 00:19:24.115 "is_configured": true, 00:19:24.115 "data_offset": 0, 00:19:24.115 "data_size": 65536 00:19:24.115 }, 00:19:24.115 { 00:19:24.115 "name": "BaseBdev4", 00:19:24.115 "uuid": "a9546dee-63cb-4e45-a1f9-ed01ffa053a8", 00:19:24.115 "is_configured": true, 00:19:24.115 "data_offset": 0, 00:19:24.115 "data_size": 65536 00:19:24.115 } 00:19:24.115 ] 00:19:24.115 } 00:19:24.115 } 00:19:24.115 }' 00:19:24.115 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:24.115 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:24.115 BaseBdev2 00:19:24.115 BaseBdev3 00:19:24.115 BaseBdev4' 00:19:24.115 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:24.115 22:02:43 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:24.115 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:24.115 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:24.115 "name": "NewBaseBdev", 00:19:24.115 "aliases": [ 00:19:24.115 "a48ad667-5728-44a1-a9d3-9c5ec15fca47" 00:19:24.115 ], 00:19:24.115 "product_name": "Malloc disk", 00:19:24.115 "block_size": 512, 00:19:24.115 "num_blocks": 65536, 00:19:24.115 "uuid": "a48ad667-5728-44a1-a9d3-9c5ec15fca47", 00:19:24.115 "assigned_rate_limits": { 00:19:24.115 "rw_ios_per_sec": 0, 00:19:24.115 "rw_mbytes_per_sec": 0, 00:19:24.115 "r_mbytes_per_sec": 0, 00:19:24.115 "w_mbytes_per_sec": 0 00:19:24.115 }, 00:19:24.115 "claimed": true, 00:19:24.115 "claim_type": "exclusive_write", 00:19:24.115 "zoned": false, 00:19:24.115 "supported_io_types": { 00:19:24.115 "read": true, 00:19:24.115 "write": true, 00:19:24.115 "unmap": true, 00:19:24.115 "flush": true, 00:19:24.115 "reset": true, 00:19:24.115 "nvme_admin": false, 00:19:24.115 "nvme_io": false, 00:19:24.115 "nvme_io_md": false, 00:19:24.115 "write_zeroes": true, 00:19:24.115 "zcopy": true, 00:19:24.115 "get_zone_info": false, 00:19:24.115 "zone_management": false, 00:19:24.115 "zone_append": false, 00:19:24.115 "compare": false, 00:19:24.115 "compare_and_write": false, 00:19:24.115 "abort": true, 00:19:24.115 "seek_hole": false, 00:19:24.115 "seek_data": false, 00:19:24.115 "copy": true, 00:19:24.115 "nvme_iov_md": false 00:19:24.115 }, 00:19:24.115 "memory_domains": [ 00:19:24.115 { 00:19:24.115 "dma_device_id": "system", 00:19:24.115 "dma_device_type": 1 00:19:24.115 }, 00:19:24.115 { 00:19:24.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.115 "dma_device_type": 2 00:19:24.115 } 00:19:24.115 ], 00:19:24.115 "driver_specific": {} 00:19:24.115 }' 00:19:24.115 22:02:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:24.115 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:24.415 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:24.415 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:24.415 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:24.415 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:24.415 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:24.415 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:24.415 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:24.415 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:24.415 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:24.415 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:24.415 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:24.415 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:24.415 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:24.674 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:24.674 "name": "BaseBdev2", 00:19:24.674 "aliases": [ 00:19:24.674 "7b960403-1c4e-4d91-a0a6-f01e9626d674" 00:19:24.674 ], 00:19:24.674 "product_name": "Malloc disk", 00:19:24.674 "block_size": 512, 00:19:24.674 "num_blocks": 65536, 00:19:24.674 "uuid": 
"7b960403-1c4e-4d91-a0a6-f01e9626d674", 00:19:24.674 "assigned_rate_limits": { 00:19:24.674 "rw_ios_per_sec": 0, 00:19:24.674 "rw_mbytes_per_sec": 0, 00:19:24.674 "r_mbytes_per_sec": 0, 00:19:24.674 "w_mbytes_per_sec": 0 00:19:24.674 }, 00:19:24.674 "claimed": true, 00:19:24.674 "claim_type": "exclusive_write", 00:19:24.674 "zoned": false, 00:19:24.674 "supported_io_types": { 00:19:24.674 "read": true, 00:19:24.674 "write": true, 00:19:24.674 "unmap": true, 00:19:24.674 "flush": true, 00:19:24.674 "reset": true, 00:19:24.674 "nvme_admin": false, 00:19:24.674 "nvme_io": false, 00:19:24.674 "nvme_io_md": false, 00:19:24.674 "write_zeroes": true, 00:19:24.674 "zcopy": true, 00:19:24.674 "get_zone_info": false, 00:19:24.674 "zone_management": false, 00:19:24.674 "zone_append": false, 00:19:24.674 "compare": false, 00:19:24.674 "compare_and_write": false, 00:19:24.674 "abort": true, 00:19:24.674 "seek_hole": false, 00:19:24.674 "seek_data": false, 00:19:24.674 "copy": true, 00:19:24.674 "nvme_iov_md": false 00:19:24.674 }, 00:19:24.674 "memory_domains": [ 00:19:24.674 { 00:19:24.674 "dma_device_id": "system", 00:19:24.674 "dma_device_type": 1 00:19:24.674 }, 00:19:24.674 { 00:19:24.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.674 "dma_device_type": 2 00:19:24.674 } 00:19:24.674 ], 00:19:24.674 "driver_specific": {} 00:19:24.674 }' 00:19:24.674 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:24.674 22:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:24.674 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:24.674 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:24.674 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:24.933 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:24.933 22:02:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:24.933 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:24.933 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:24.933 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:24.933 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:24.933 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:24.933 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:24.933 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:24.933 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:25.191 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:25.191 "name": "BaseBdev3", 00:19:25.191 "aliases": [ 00:19:25.191 "a49b1284-e60e-4202-bfa0-802d7f7b2675" 00:19:25.191 ], 00:19:25.191 "product_name": "Malloc disk", 00:19:25.191 "block_size": 512, 00:19:25.191 "num_blocks": 65536, 00:19:25.191 "uuid": "a49b1284-e60e-4202-bfa0-802d7f7b2675", 00:19:25.191 "assigned_rate_limits": { 00:19:25.191 "rw_ios_per_sec": 0, 00:19:25.191 "rw_mbytes_per_sec": 0, 00:19:25.191 "r_mbytes_per_sec": 0, 00:19:25.191 "w_mbytes_per_sec": 0 00:19:25.191 }, 00:19:25.191 "claimed": true, 00:19:25.191 "claim_type": "exclusive_write", 00:19:25.191 "zoned": false, 00:19:25.191 "supported_io_types": { 00:19:25.191 "read": true, 00:19:25.191 "write": true, 00:19:25.191 "unmap": true, 00:19:25.191 "flush": true, 00:19:25.191 "reset": true, 00:19:25.191 "nvme_admin": false, 00:19:25.191 "nvme_io": false, 00:19:25.191 "nvme_io_md": false, 
00:19:25.191 "write_zeroes": true, 00:19:25.191 "zcopy": true, 00:19:25.191 "get_zone_info": false, 00:19:25.191 "zone_management": false, 00:19:25.191 "zone_append": false, 00:19:25.191 "compare": false, 00:19:25.191 "compare_and_write": false, 00:19:25.191 "abort": true, 00:19:25.191 "seek_hole": false, 00:19:25.191 "seek_data": false, 00:19:25.191 "copy": true, 00:19:25.191 "nvme_iov_md": false 00:19:25.191 }, 00:19:25.191 "memory_domains": [ 00:19:25.191 { 00:19:25.191 "dma_device_id": "system", 00:19:25.191 "dma_device_type": 1 00:19:25.191 }, 00:19:25.191 { 00:19:25.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.191 "dma_device_type": 2 00:19:25.191 } 00:19:25.191 ], 00:19:25.191 "driver_specific": {} 00:19:25.191 }' 00:19:25.191 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:25.191 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:25.191 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:25.191 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:25.191 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:25.191 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:25.191 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:25.191 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:25.450 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:25.450 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:25.450 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:25.450 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:25.450 22:02:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:25.450 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:25.450 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:25.450 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:25.450 "name": "BaseBdev4", 00:19:25.450 "aliases": [ 00:19:25.450 "a9546dee-63cb-4e45-a1f9-ed01ffa053a8" 00:19:25.450 ], 00:19:25.450 "product_name": "Malloc disk", 00:19:25.450 "block_size": 512, 00:19:25.450 "num_blocks": 65536, 00:19:25.450 "uuid": "a9546dee-63cb-4e45-a1f9-ed01ffa053a8", 00:19:25.450 "assigned_rate_limits": { 00:19:25.450 "rw_ios_per_sec": 0, 00:19:25.450 "rw_mbytes_per_sec": 0, 00:19:25.450 "r_mbytes_per_sec": 0, 00:19:25.450 "w_mbytes_per_sec": 0 00:19:25.450 }, 00:19:25.450 "claimed": true, 00:19:25.450 "claim_type": "exclusive_write", 00:19:25.450 "zoned": false, 00:19:25.450 "supported_io_types": { 00:19:25.450 "read": true, 00:19:25.450 "write": true, 00:19:25.450 "unmap": true, 00:19:25.450 "flush": true, 00:19:25.450 "reset": true, 00:19:25.450 "nvme_admin": false, 00:19:25.450 "nvme_io": false, 00:19:25.450 "nvme_io_md": false, 00:19:25.450 "write_zeroes": true, 00:19:25.450 "zcopy": true, 00:19:25.450 "get_zone_info": false, 00:19:25.450 "zone_management": false, 00:19:25.450 "zone_append": false, 00:19:25.450 "compare": false, 00:19:25.450 "compare_and_write": false, 00:19:25.450 "abort": true, 00:19:25.450 "seek_hole": false, 00:19:25.450 "seek_data": false, 00:19:25.450 "copy": true, 00:19:25.450 "nvme_iov_md": false 00:19:25.450 }, 00:19:25.450 "memory_domains": [ 00:19:25.450 { 00:19:25.450 "dma_device_id": "system", 00:19:25.450 "dma_device_type": 1 00:19:25.450 }, 00:19:25.450 { 00:19:25.450 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:25.450 "dma_device_type": 2 00:19:25.450 } 00:19:25.450 ], 00:19:25.450 "driver_specific": {} 00:19:25.450 }' 00:19:25.450 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:25.708 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:25.708 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:25.708 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:25.708 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:25.708 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:25.708 22:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:25.708 22:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:25.708 22:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:25.708 22:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:25.967 22:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:25.967 22:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:25.967 22:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:25.967 [2024-07-13 22:02:45.293585] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:25.967 [2024-07-13 22:02:45.293612] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:25.967 [2024-07-13 22:02:45.293681] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:25.967 [2024-07-13 22:02:45.293759] 
bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:25.967 [2024-07-13 22:02:45.293770] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042080 name Existed_Raid, state offline 00:19:25.967 22:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1427539 00:19:25.967 22:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 1427539 ']' 00:19:25.967 22:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 1427539 00:19:25.967 22:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:19:25.967 22:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:25.967 22:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1427539 00:19:26.226 22:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:26.226 22:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:26.226 22:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1427539' 00:19:26.226 killing process with pid 1427539 00:19:26.226 22:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 1427539 00:19:26.226 [2024-07-13 22:02:45.362133] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:26.226 22:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 1427539 00:19:26.485 [2024-07-13 22:02:45.675268] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:27.863 22:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:19:27.863 00:19:27.863 real 0m26.297s 00:19:27.863 user 0m46.261s 00:19:27.863 sys 0m4.830s 00:19:27.863 
22:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:27.863 22:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.863 ************************************ 00:19:27.863 END TEST raid_state_function_test 00:19:27.863 ************************************ 00:19:27.863 22:02:46 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:27.864 22:02:46 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:19:27.864 22:02:46 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:27.864 22:02:46 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:27.864 22:02:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:27.864 ************************************ 00:19:27.864 START TEST raid_state_function_test_sb 00:19:27.864 ************************************ 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 true 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:27.864 22:02:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # 
strip_size=64 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1432709 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1432709' 00:19:27.864 Process raid pid: 1432709 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1432709 /var/tmp/spdk-raid.sock 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1432709 ']' 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:27.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:27.864 22:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.864 [2024-07-13 22:02:47.060065] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:19:27.864 [2024-07-13 22:02:47.060171] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3d:02.1 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3d:02.3 cannot be used 00:19:27.864 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3f:01.3 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3f:01.4 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:27.864 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:27.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.864 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:27.864 [2024-07-13 22:02:47.223574] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.124 [2024-07-13 22:02:47.428834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.384 [2024-07-13 22:02:47.676003] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.384 [2024-07-13 22:02:47.676030] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.644 22:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:28.644 22:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:19:28.644 22:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:28.644 [2024-07-13 22:02:47.981019] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:28.644 [2024-07-13 22:02:47.981066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev1 doesn't exist now 00:19:28.644 [2024-07-13 22:02:47.981076] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:28.644 [2024-07-13 22:02:47.981104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:28.644 [2024-07-13 22:02:47.981112] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:28.644 [2024-07-13 22:02:47.981124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:28.644 [2024-07-13 22:02:47.981133] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:28.644 [2024-07-13 22:02:47.981144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:28.644 22:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:28.644 22:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:28.644 22:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:28.644 22:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:28.644 22:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:28.644 22:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:28.644 22:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:28.644 22:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:28.644 22:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:28.644 22:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:19:28.644 22:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.644 22:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.904 22:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:28.904 "name": "Existed_Raid", 00:19:28.904 "uuid": "89ece253-1561-43e1-b501-6726e44a52f2", 00:19:28.904 "strip_size_kb": 64, 00:19:28.904 "state": "configuring", 00:19:28.904 "raid_level": "concat", 00:19:28.904 "superblock": true, 00:19:28.904 "num_base_bdevs": 4, 00:19:28.904 "num_base_bdevs_discovered": 0, 00:19:28.904 "num_base_bdevs_operational": 4, 00:19:28.904 "base_bdevs_list": [ 00:19:28.904 { 00:19:28.904 "name": "BaseBdev1", 00:19:28.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.904 "is_configured": false, 00:19:28.904 "data_offset": 0, 00:19:28.904 "data_size": 0 00:19:28.904 }, 00:19:28.904 { 00:19:28.904 "name": "BaseBdev2", 00:19:28.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.904 "is_configured": false, 00:19:28.904 "data_offset": 0, 00:19:28.904 "data_size": 0 00:19:28.904 }, 00:19:28.904 { 00:19:28.904 "name": "BaseBdev3", 00:19:28.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.904 "is_configured": false, 00:19:28.904 "data_offset": 0, 00:19:28.904 "data_size": 0 00:19:28.904 }, 00:19:28.904 { 00:19:28.904 "name": "BaseBdev4", 00:19:28.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.904 "is_configured": false, 00:19:28.904 "data_offset": 0, 00:19:28.904 "data_size": 0 00:19:28.904 } 00:19:28.904 ] 00:19:28.904 }' 00:19:28.904 22:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:28.904 22:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.472 
22:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:29.472 [2024-07-13 22:02:48.787016] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:29.472 [2024-07-13 22:02:48.787052] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:19:29.472 22:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:29.732 [2024-07-13 22:02:48.955518] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:29.732 [2024-07-13 22:02:48.955562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:29.732 [2024-07-13 22:02:48.955572] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:29.732 [2024-07-13 22:02:48.955606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:29.732 [2024-07-13 22:02:48.955614] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:29.732 [2024-07-13 22:02:48.955625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:29.732 [2024-07-13 22:02:48.955633] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:29.732 [2024-07-13 22:02:48.955645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:29.732 22:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 
512 -b BaseBdev1 00:19:29.991 [2024-07-13 22:02:49.160914] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:29.991 BaseBdev1 00:19:29.991 22:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:29.991 22:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:29.991 22:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:29.991 22:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:29.991 22:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:29.991 22:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:29.991 22:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:29.991 22:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:30.251 [ 00:19:30.251 { 00:19:30.251 "name": "BaseBdev1", 00:19:30.251 "aliases": [ 00:19:30.251 "e83c53b1-5f0e-4d8e-8c24-e5a295b03363" 00:19:30.251 ], 00:19:30.251 "product_name": "Malloc disk", 00:19:30.251 "block_size": 512, 00:19:30.251 "num_blocks": 65536, 00:19:30.251 "uuid": "e83c53b1-5f0e-4d8e-8c24-e5a295b03363", 00:19:30.251 "assigned_rate_limits": { 00:19:30.251 "rw_ios_per_sec": 0, 00:19:30.251 "rw_mbytes_per_sec": 0, 00:19:30.251 "r_mbytes_per_sec": 0, 00:19:30.251 "w_mbytes_per_sec": 0 00:19:30.251 }, 00:19:30.251 "claimed": true, 00:19:30.251 "claim_type": "exclusive_write", 00:19:30.251 "zoned": false, 00:19:30.251 "supported_io_types": { 00:19:30.251 "read": true, 00:19:30.251 
"write": true, 00:19:30.251 "unmap": true, 00:19:30.251 "flush": true, 00:19:30.251 "reset": true, 00:19:30.251 "nvme_admin": false, 00:19:30.251 "nvme_io": false, 00:19:30.251 "nvme_io_md": false, 00:19:30.251 "write_zeroes": true, 00:19:30.251 "zcopy": true, 00:19:30.251 "get_zone_info": false, 00:19:30.251 "zone_management": false, 00:19:30.251 "zone_append": false, 00:19:30.251 "compare": false, 00:19:30.251 "compare_and_write": false, 00:19:30.251 "abort": true, 00:19:30.251 "seek_hole": false, 00:19:30.251 "seek_data": false, 00:19:30.251 "copy": true, 00:19:30.251 "nvme_iov_md": false 00:19:30.251 }, 00:19:30.251 "memory_domains": [ 00:19:30.251 { 00:19:30.251 "dma_device_id": "system", 00:19:30.251 "dma_device_type": 1 00:19:30.251 }, 00:19:30.251 { 00:19:30.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.251 "dma_device_type": 2 00:19:30.251 } 00:19:30.251 ], 00:19:30.251 "driver_specific": {} 00:19:30.251 } 00:19:30.251 ] 00:19:30.251 22:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:30.251 22:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:30.251 22:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:30.251 22:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:30.251 22:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:30.251 22:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:30.251 22:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:30.251 22:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:30.251 22:02:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:30.251 22:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:30.251 22:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:30.251 22:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.251 22:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:30.510 22:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:30.510 "name": "Existed_Raid", 00:19:30.510 "uuid": "1c4d3a53-c168-4c11-9d76-ec10c8f05a1e", 00:19:30.510 "strip_size_kb": 64, 00:19:30.510 "state": "configuring", 00:19:30.510 "raid_level": "concat", 00:19:30.510 "superblock": true, 00:19:30.510 "num_base_bdevs": 4, 00:19:30.510 "num_base_bdevs_discovered": 1, 00:19:30.510 "num_base_bdevs_operational": 4, 00:19:30.510 "base_bdevs_list": [ 00:19:30.510 { 00:19:30.510 "name": "BaseBdev1", 00:19:30.510 "uuid": "e83c53b1-5f0e-4d8e-8c24-e5a295b03363", 00:19:30.510 "is_configured": true, 00:19:30.510 "data_offset": 2048, 00:19:30.510 "data_size": 63488 00:19:30.510 }, 00:19:30.510 { 00:19:30.510 "name": "BaseBdev2", 00:19:30.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.510 "is_configured": false, 00:19:30.510 "data_offset": 0, 00:19:30.510 "data_size": 0 00:19:30.510 }, 00:19:30.510 { 00:19:30.510 "name": "BaseBdev3", 00:19:30.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.510 "is_configured": false, 00:19:30.510 "data_offset": 0, 00:19:30.510 "data_size": 0 00:19:30.510 }, 00:19:30.510 { 00:19:30.510 "name": "BaseBdev4", 00:19:30.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.510 "is_configured": false, 00:19:30.510 "data_offset": 0, 00:19:30.510 "data_size": 0 
00:19:30.510 } 00:19:30.510 ] 00:19:30.510 }' 00:19:30.510 22:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:30.510 22:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.079 22:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:31.079 [2024-07-13 22:02:50.360091] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:31.079 [2024-07-13 22:02:50.360140] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:19:31.079 22:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:31.338 [2024-07-13 22:02:50.532611] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:31.338 [2024-07-13 22:02:50.534291] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:31.338 [2024-07-13 22:02:50.534324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:31.338 [2024-07-13 22:02:50.534335] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:31.338 [2024-07-13 22:02:50.534346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:31.338 [2024-07-13 22:02:50.534354] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:31.338 [2024-07-13 22:02:50.534367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:31.338 22:02:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:31.338 22:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:31.338 22:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:31.338 22:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:31.338 22:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:31.338 22:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:31.338 22:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:31.338 22:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:31.338 22:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:31.338 22:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:31.338 22:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:31.338 22:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:31.338 22:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.338 22:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.598 22:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:31.598 "name": "Existed_Raid", 00:19:31.598 "uuid": "5141a397-75b2-4871-b716-7cb0c5cfeb0a", 00:19:31.598 "strip_size_kb": 64, 00:19:31.598 "state": "configuring", 00:19:31.598 "raid_level": "concat", 
00:19:31.598 "superblock": true, 00:19:31.598 "num_base_bdevs": 4, 00:19:31.598 "num_base_bdevs_discovered": 1, 00:19:31.598 "num_base_bdevs_operational": 4, 00:19:31.598 "base_bdevs_list": [ 00:19:31.598 { 00:19:31.598 "name": "BaseBdev1", 00:19:31.598 "uuid": "e83c53b1-5f0e-4d8e-8c24-e5a295b03363", 00:19:31.598 "is_configured": true, 00:19:31.598 "data_offset": 2048, 00:19:31.598 "data_size": 63488 00:19:31.598 }, 00:19:31.598 { 00:19:31.598 "name": "BaseBdev2", 00:19:31.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.598 "is_configured": false, 00:19:31.598 "data_offset": 0, 00:19:31.598 "data_size": 0 00:19:31.598 }, 00:19:31.598 { 00:19:31.598 "name": "BaseBdev3", 00:19:31.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.598 "is_configured": false, 00:19:31.598 "data_offset": 0, 00:19:31.598 "data_size": 0 00:19:31.598 }, 00:19:31.598 { 00:19:31.598 "name": "BaseBdev4", 00:19:31.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.598 "is_configured": false, 00:19:31.598 "data_offset": 0, 00:19:31.598 "data_size": 0 00:19:31.598 } 00:19:31.598 ] 00:19:31.598 }' 00:19:31.598 22:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:31.598 22:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.856 22:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:32.113 [2024-07-13 22:02:51.356726] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:32.113 BaseBdev2 00:19:32.113 22:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:32.113 22:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:32.113 22:02:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:32.113 22:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:32.113 22:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:32.113 22:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:32.113 22:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:32.371 22:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:32.371 [ 00:19:32.371 { 00:19:32.371 "name": "BaseBdev2", 00:19:32.371 "aliases": [ 00:19:32.371 "2ed6ce89-35dc-4cfe-8407-ec9b757b4857" 00:19:32.371 ], 00:19:32.371 "product_name": "Malloc disk", 00:19:32.371 "block_size": 512, 00:19:32.371 "num_blocks": 65536, 00:19:32.371 "uuid": "2ed6ce89-35dc-4cfe-8407-ec9b757b4857", 00:19:32.371 "assigned_rate_limits": { 00:19:32.371 "rw_ios_per_sec": 0, 00:19:32.371 "rw_mbytes_per_sec": 0, 00:19:32.371 "r_mbytes_per_sec": 0, 00:19:32.371 "w_mbytes_per_sec": 0 00:19:32.371 }, 00:19:32.371 "claimed": true, 00:19:32.371 "claim_type": "exclusive_write", 00:19:32.371 "zoned": false, 00:19:32.371 "supported_io_types": { 00:19:32.371 "read": true, 00:19:32.371 "write": true, 00:19:32.371 "unmap": true, 00:19:32.371 "flush": true, 00:19:32.371 "reset": true, 00:19:32.371 "nvme_admin": false, 00:19:32.371 "nvme_io": false, 00:19:32.371 "nvme_io_md": false, 00:19:32.371 "write_zeroes": true, 00:19:32.371 "zcopy": true, 00:19:32.371 "get_zone_info": false, 00:19:32.371 "zone_management": false, 00:19:32.371 "zone_append": false, 00:19:32.371 "compare": false, 00:19:32.371 "compare_and_write": false, 00:19:32.371 "abort": 
true, 00:19:32.371 "seek_hole": false, 00:19:32.371 "seek_data": false, 00:19:32.371 "copy": true, 00:19:32.371 "nvme_iov_md": false 00:19:32.371 }, 00:19:32.371 "memory_domains": [ 00:19:32.371 { 00:19:32.371 "dma_device_id": "system", 00:19:32.371 "dma_device_type": 1 00:19:32.371 }, 00:19:32.371 { 00:19:32.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.371 "dma_device_type": 2 00:19:32.371 } 00:19:32.371 ], 00:19:32.371 "driver_specific": {} 00:19:32.371 } 00:19:32.371 ] 00:19:32.371 22:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:32.371 22:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:32.371 22:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:32.371 22:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:32.371 22:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:32.371 22:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:32.371 22:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:32.371 22:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:32.371 22:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:32.371 22:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:32.371 22:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:32.371 22:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:32.371 22:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
00:19:32.371 22:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.371 22:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.629 22:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:32.629 "name": "Existed_Raid", 00:19:32.629 "uuid": "5141a397-75b2-4871-b716-7cb0c5cfeb0a", 00:19:32.629 "strip_size_kb": 64, 00:19:32.629 "state": "configuring", 00:19:32.629 "raid_level": "concat", 00:19:32.629 "superblock": true, 00:19:32.629 "num_base_bdevs": 4, 00:19:32.629 "num_base_bdevs_discovered": 2, 00:19:32.629 "num_base_bdevs_operational": 4, 00:19:32.629 "base_bdevs_list": [ 00:19:32.629 { 00:19:32.629 "name": "BaseBdev1", 00:19:32.629 "uuid": "e83c53b1-5f0e-4d8e-8c24-e5a295b03363", 00:19:32.629 "is_configured": true, 00:19:32.629 "data_offset": 2048, 00:19:32.629 "data_size": 63488 00:19:32.629 }, 00:19:32.629 { 00:19:32.629 "name": "BaseBdev2", 00:19:32.629 "uuid": "2ed6ce89-35dc-4cfe-8407-ec9b757b4857", 00:19:32.629 "is_configured": true, 00:19:32.629 "data_offset": 2048, 00:19:32.629 "data_size": 63488 00:19:32.629 }, 00:19:32.629 { 00:19:32.629 "name": "BaseBdev3", 00:19:32.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.629 "is_configured": false, 00:19:32.629 "data_offset": 0, 00:19:32.629 "data_size": 0 00:19:32.629 }, 00:19:32.629 { 00:19:32.629 "name": "BaseBdev4", 00:19:32.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.629 "is_configured": false, 00:19:32.629 "data_offset": 0, 00:19:32.629 "data_size": 0 00:19:32.629 } 00:19:32.629 ] 00:19:32.629 }' 00:19:32.629 22:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:32.629 22:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.197 
22:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:33.197 [2024-07-13 22:02:52.538395] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:33.197 BaseBdev3 00:19:33.197 22:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:33.197 22:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:33.197 22:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:33.197 22:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:33.197 22:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:33.197 22:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:33.197 22:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:33.457 22:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:33.716 [ 00:19:33.716 { 00:19:33.716 "name": "BaseBdev3", 00:19:33.716 "aliases": [ 00:19:33.716 "02cd833f-0d70-424d-9976-4a902f11a7ce" 00:19:33.716 ], 00:19:33.716 "product_name": "Malloc disk", 00:19:33.716 "block_size": 512, 00:19:33.716 "num_blocks": 65536, 00:19:33.716 "uuid": "02cd833f-0d70-424d-9976-4a902f11a7ce", 00:19:33.716 "assigned_rate_limits": { 00:19:33.716 "rw_ios_per_sec": 0, 00:19:33.716 "rw_mbytes_per_sec": 0, 00:19:33.716 "r_mbytes_per_sec": 0, 00:19:33.716 "w_mbytes_per_sec": 0 00:19:33.716 }, 
00:19:33.716 "claimed": true, 00:19:33.716 "claim_type": "exclusive_write", 00:19:33.716 "zoned": false, 00:19:33.716 "supported_io_types": { 00:19:33.716 "read": true, 00:19:33.716 "write": true, 00:19:33.716 "unmap": true, 00:19:33.716 "flush": true, 00:19:33.716 "reset": true, 00:19:33.716 "nvme_admin": false, 00:19:33.716 "nvme_io": false, 00:19:33.716 "nvme_io_md": false, 00:19:33.716 "write_zeroes": true, 00:19:33.716 "zcopy": true, 00:19:33.716 "get_zone_info": false, 00:19:33.716 "zone_management": false, 00:19:33.716 "zone_append": false, 00:19:33.716 "compare": false, 00:19:33.716 "compare_and_write": false, 00:19:33.716 "abort": true, 00:19:33.716 "seek_hole": false, 00:19:33.716 "seek_data": false, 00:19:33.716 "copy": true, 00:19:33.716 "nvme_iov_md": false 00:19:33.716 }, 00:19:33.716 "memory_domains": [ 00:19:33.716 { 00:19:33.716 "dma_device_id": "system", 00:19:33.716 "dma_device_type": 1 00:19:33.716 }, 00:19:33.716 { 00:19:33.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.716 "dma_device_type": 2 00:19:33.716 } 00:19:33.716 ], 00:19:33.716 "driver_specific": {} 00:19:33.716 } 00:19:33.716 ] 00:19:33.716 22:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:33.716 22:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:33.716 22:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:33.716 22:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:33.716 22:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:33.716 22:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:33.716 22:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:33.716 22:02:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:33.716 22:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:33.716 22:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:33.716 22:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:33.716 22:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:33.716 22:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:33.716 22:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:33.716 22:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.716 22:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:33.716 "name": "Existed_Raid", 00:19:33.716 "uuid": "5141a397-75b2-4871-b716-7cb0c5cfeb0a", 00:19:33.716 "strip_size_kb": 64, 00:19:33.716 "state": "configuring", 00:19:33.716 "raid_level": "concat", 00:19:33.716 "superblock": true, 00:19:33.716 "num_base_bdevs": 4, 00:19:33.716 "num_base_bdevs_discovered": 3, 00:19:33.716 "num_base_bdevs_operational": 4, 00:19:33.716 "base_bdevs_list": [ 00:19:33.716 { 00:19:33.716 "name": "BaseBdev1", 00:19:33.716 "uuid": "e83c53b1-5f0e-4d8e-8c24-e5a295b03363", 00:19:33.716 "is_configured": true, 00:19:33.716 "data_offset": 2048, 00:19:33.716 "data_size": 63488 00:19:33.716 }, 00:19:33.716 { 00:19:33.716 "name": "BaseBdev2", 00:19:33.716 "uuid": "2ed6ce89-35dc-4cfe-8407-ec9b757b4857", 00:19:33.716 "is_configured": true, 00:19:33.716 "data_offset": 2048, 00:19:33.716 "data_size": 63488 00:19:33.716 }, 00:19:33.716 { 00:19:33.716 "name": 
"BaseBdev3", 00:19:33.716 "uuid": "02cd833f-0d70-424d-9976-4a902f11a7ce", 00:19:33.716 "is_configured": true, 00:19:33.716 "data_offset": 2048, 00:19:33.716 "data_size": 63488 00:19:33.716 }, 00:19:33.716 { 00:19:33.716 "name": "BaseBdev4", 00:19:33.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.716 "is_configured": false, 00:19:33.716 "data_offset": 0, 00:19:33.716 "data_size": 0 00:19:33.716 } 00:19:33.716 ] 00:19:33.716 }' 00:19:33.716 22:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:33.716 22:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.284 22:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:34.543 [2024-07-13 22:02:53.752270] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:34.543 [2024-07-13 22:02:53.752497] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:19:34.543 [2024-07-13 22:02:53.752519] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:34.543 [2024-07-13 22:02:53.752755] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:19:34.543 [2024-07-13 22:02:53.752956] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:19:34.543 [2024-07-13 22:02:53.752970] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:19:34.543 [2024-07-13 22:02:53.753104] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:34.543 BaseBdev4 00:19:34.543 22:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:19:34.543 22:02:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:19:34.543 22:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:34.543 22:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:34.543 22:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:34.543 22:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:34.543 22:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:34.802 22:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:34.802 [ 00:19:34.802 { 00:19:34.802 "name": "BaseBdev4", 00:19:34.802 "aliases": [ 00:19:34.802 "ba067fe6-ab6f-4e82-93a1-601249348cd6" 00:19:34.802 ], 00:19:34.802 "product_name": "Malloc disk", 00:19:34.802 "block_size": 512, 00:19:34.802 "num_blocks": 65536, 00:19:34.802 "uuid": "ba067fe6-ab6f-4e82-93a1-601249348cd6", 00:19:34.802 "assigned_rate_limits": { 00:19:34.802 "rw_ios_per_sec": 0, 00:19:34.802 "rw_mbytes_per_sec": 0, 00:19:34.802 "r_mbytes_per_sec": 0, 00:19:34.802 "w_mbytes_per_sec": 0 00:19:34.802 }, 00:19:34.802 "claimed": true, 00:19:34.802 "claim_type": "exclusive_write", 00:19:34.802 "zoned": false, 00:19:34.802 "supported_io_types": { 00:19:34.802 "read": true, 00:19:34.802 "write": true, 00:19:34.802 "unmap": true, 00:19:34.802 "flush": true, 00:19:34.802 "reset": true, 00:19:34.802 "nvme_admin": false, 00:19:34.802 "nvme_io": false, 00:19:34.802 "nvme_io_md": false, 00:19:34.802 "write_zeroes": true, 00:19:34.802 "zcopy": true, 00:19:34.802 "get_zone_info": false, 00:19:34.802 "zone_management": false, 
00:19:34.802 "zone_append": false, 00:19:34.802 "compare": false, 00:19:34.802 "compare_and_write": false, 00:19:34.802 "abort": true, 00:19:34.802 "seek_hole": false, 00:19:34.802 "seek_data": false, 00:19:34.802 "copy": true, 00:19:34.802 "nvme_iov_md": false 00:19:34.802 }, 00:19:34.802 "memory_domains": [ 00:19:34.802 { 00:19:34.802 "dma_device_id": "system", 00:19:34.802 "dma_device_type": 1 00:19:34.802 }, 00:19:34.802 { 00:19:34.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.802 "dma_device_type": 2 00:19:34.802 } 00:19:34.802 ], 00:19:34.802 "driver_specific": {} 00:19:34.802 } 00:19:34.802 ] 00:19:34.802 22:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:34.802 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:34.802 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:34.802 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:34.802 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:34.802 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:34.802 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:34.802 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:34.802 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:34.802 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:34.802 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:34.802 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:19:34.802 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:34.802 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.802 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.061 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:35.061 "name": "Existed_Raid", 00:19:35.061 "uuid": "5141a397-75b2-4871-b716-7cb0c5cfeb0a", 00:19:35.061 "strip_size_kb": 64, 00:19:35.061 "state": "online", 00:19:35.061 "raid_level": "concat", 00:19:35.061 "superblock": true, 00:19:35.061 "num_base_bdevs": 4, 00:19:35.061 "num_base_bdevs_discovered": 4, 00:19:35.061 "num_base_bdevs_operational": 4, 00:19:35.061 "base_bdevs_list": [ 00:19:35.061 { 00:19:35.061 "name": "BaseBdev1", 00:19:35.061 "uuid": "e83c53b1-5f0e-4d8e-8c24-e5a295b03363", 00:19:35.061 "is_configured": true, 00:19:35.061 "data_offset": 2048, 00:19:35.061 "data_size": 63488 00:19:35.061 }, 00:19:35.061 { 00:19:35.061 "name": "BaseBdev2", 00:19:35.061 "uuid": "2ed6ce89-35dc-4cfe-8407-ec9b757b4857", 00:19:35.061 "is_configured": true, 00:19:35.061 "data_offset": 2048, 00:19:35.061 "data_size": 63488 00:19:35.061 }, 00:19:35.061 { 00:19:35.061 "name": "BaseBdev3", 00:19:35.061 "uuid": "02cd833f-0d70-424d-9976-4a902f11a7ce", 00:19:35.061 "is_configured": true, 00:19:35.061 "data_offset": 2048, 00:19:35.061 "data_size": 63488 00:19:35.061 }, 00:19:35.061 { 00:19:35.061 "name": "BaseBdev4", 00:19:35.061 "uuid": "ba067fe6-ab6f-4e82-93a1-601249348cd6", 00:19:35.061 "is_configured": true, 00:19:35.061 "data_offset": 2048, 00:19:35.061 "data_size": 63488 00:19:35.061 } 00:19:35.061 ] 00:19:35.061 }' 00:19:35.061 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:19:35.061 22:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.630 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:35.630 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:35.630 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:35.630 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:35.630 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:35.630 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:35.630 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:35.630 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:35.630 [2024-07-13 22:02:54.891590] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:35.630 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:35.630 "name": "Existed_Raid", 00:19:35.630 "aliases": [ 00:19:35.630 "5141a397-75b2-4871-b716-7cb0c5cfeb0a" 00:19:35.630 ], 00:19:35.630 "product_name": "Raid Volume", 00:19:35.630 "block_size": 512, 00:19:35.630 "num_blocks": 253952, 00:19:35.630 "uuid": "5141a397-75b2-4871-b716-7cb0c5cfeb0a", 00:19:35.630 "assigned_rate_limits": { 00:19:35.630 "rw_ios_per_sec": 0, 00:19:35.630 "rw_mbytes_per_sec": 0, 00:19:35.630 "r_mbytes_per_sec": 0, 00:19:35.630 "w_mbytes_per_sec": 0 00:19:35.630 }, 00:19:35.630 "claimed": false, 00:19:35.630 "zoned": false, 00:19:35.630 "supported_io_types": { 00:19:35.630 "read": true, 00:19:35.630 "write": true, 
00:19:35.630 "unmap": true, 00:19:35.630 "flush": true, 00:19:35.630 "reset": true, 00:19:35.630 "nvme_admin": false, 00:19:35.630 "nvme_io": false, 00:19:35.630 "nvme_io_md": false, 00:19:35.630 "write_zeroes": true, 00:19:35.630 "zcopy": false, 00:19:35.630 "get_zone_info": false, 00:19:35.630 "zone_management": false, 00:19:35.630 "zone_append": false, 00:19:35.630 "compare": false, 00:19:35.630 "compare_and_write": false, 00:19:35.630 "abort": false, 00:19:35.630 "seek_hole": false, 00:19:35.630 "seek_data": false, 00:19:35.630 "copy": false, 00:19:35.630 "nvme_iov_md": false 00:19:35.630 }, 00:19:35.630 "memory_domains": [ 00:19:35.630 { 00:19:35.630 "dma_device_id": "system", 00:19:35.630 "dma_device_type": 1 00:19:35.630 }, 00:19:35.630 { 00:19:35.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.630 "dma_device_type": 2 00:19:35.630 }, 00:19:35.630 { 00:19:35.630 "dma_device_id": "system", 00:19:35.630 "dma_device_type": 1 00:19:35.630 }, 00:19:35.630 { 00:19:35.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.630 "dma_device_type": 2 00:19:35.630 }, 00:19:35.630 { 00:19:35.630 "dma_device_id": "system", 00:19:35.630 "dma_device_type": 1 00:19:35.630 }, 00:19:35.630 { 00:19:35.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.630 "dma_device_type": 2 00:19:35.630 }, 00:19:35.630 { 00:19:35.630 "dma_device_id": "system", 00:19:35.630 "dma_device_type": 1 00:19:35.630 }, 00:19:35.630 { 00:19:35.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.630 "dma_device_type": 2 00:19:35.630 } 00:19:35.630 ], 00:19:35.630 "driver_specific": { 00:19:35.630 "raid": { 00:19:35.630 "uuid": "5141a397-75b2-4871-b716-7cb0c5cfeb0a", 00:19:35.630 "strip_size_kb": 64, 00:19:35.630 "state": "online", 00:19:35.630 "raid_level": "concat", 00:19:35.630 "superblock": true, 00:19:35.630 "num_base_bdevs": 4, 00:19:35.630 "num_base_bdevs_discovered": 4, 00:19:35.630 "num_base_bdevs_operational": 4, 00:19:35.630 "base_bdevs_list": [ 00:19:35.630 { 00:19:35.630 "name": 
"BaseBdev1", 00:19:35.630 "uuid": "e83c53b1-5f0e-4d8e-8c24-e5a295b03363", 00:19:35.630 "is_configured": true, 00:19:35.630 "data_offset": 2048, 00:19:35.630 "data_size": 63488 00:19:35.630 }, 00:19:35.630 { 00:19:35.630 "name": "BaseBdev2", 00:19:35.630 "uuid": "2ed6ce89-35dc-4cfe-8407-ec9b757b4857", 00:19:35.630 "is_configured": true, 00:19:35.630 "data_offset": 2048, 00:19:35.630 "data_size": 63488 00:19:35.630 }, 00:19:35.630 { 00:19:35.630 "name": "BaseBdev3", 00:19:35.630 "uuid": "02cd833f-0d70-424d-9976-4a902f11a7ce", 00:19:35.630 "is_configured": true, 00:19:35.630 "data_offset": 2048, 00:19:35.630 "data_size": 63488 00:19:35.630 }, 00:19:35.630 { 00:19:35.630 "name": "BaseBdev4", 00:19:35.630 "uuid": "ba067fe6-ab6f-4e82-93a1-601249348cd6", 00:19:35.630 "is_configured": true, 00:19:35.630 "data_offset": 2048, 00:19:35.630 "data_size": 63488 00:19:35.630 } 00:19:35.630 ] 00:19:35.630 } 00:19:35.630 } 00:19:35.630 }' 00:19:35.630 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:35.630 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:35.631 BaseBdev2 00:19:35.631 BaseBdev3 00:19:35.631 BaseBdev4' 00:19:35.631 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:35.631 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:35.631 22:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:35.890 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:35.890 "name": "BaseBdev1", 00:19:35.890 "aliases": [ 00:19:35.890 "e83c53b1-5f0e-4d8e-8c24-e5a295b03363" 00:19:35.890 ], 00:19:35.890 "product_name": "Malloc 
disk", 00:19:35.890 "block_size": 512, 00:19:35.890 "num_blocks": 65536, 00:19:35.890 "uuid": "e83c53b1-5f0e-4d8e-8c24-e5a295b03363", 00:19:35.890 "assigned_rate_limits": { 00:19:35.890 "rw_ios_per_sec": 0, 00:19:35.890 "rw_mbytes_per_sec": 0, 00:19:35.890 "r_mbytes_per_sec": 0, 00:19:35.890 "w_mbytes_per_sec": 0 00:19:35.890 }, 00:19:35.890 "claimed": true, 00:19:35.890 "claim_type": "exclusive_write", 00:19:35.890 "zoned": false, 00:19:35.890 "supported_io_types": { 00:19:35.890 "read": true, 00:19:35.890 "write": true, 00:19:35.890 "unmap": true, 00:19:35.890 "flush": true, 00:19:35.890 "reset": true, 00:19:35.890 "nvme_admin": false, 00:19:35.890 "nvme_io": false, 00:19:35.890 "nvme_io_md": false, 00:19:35.890 "write_zeroes": true, 00:19:35.890 "zcopy": true, 00:19:35.890 "get_zone_info": false, 00:19:35.890 "zone_management": false, 00:19:35.890 "zone_append": false, 00:19:35.890 "compare": false, 00:19:35.890 "compare_and_write": false, 00:19:35.890 "abort": true, 00:19:35.890 "seek_hole": false, 00:19:35.890 "seek_data": false, 00:19:35.890 "copy": true, 00:19:35.890 "nvme_iov_md": false 00:19:35.890 }, 00:19:35.890 "memory_domains": [ 00:19:35.890 { 00:19:35.890 "dma_device_id": "system", 00:19:35.890 "dma_device_type": 1 00:19:35.890 }, 00:19:35.890 { 00:19:35.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.890 "dma_device_type": 2 00:19:35.890 } 00:19:35.890 ], 00:19:35.890 "driver_specific": {} 00:19:35.890 }' 00:19:35.890 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:35.890 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:35.890 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:35.890 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:35.890 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:36.149 22:02:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:36.149 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:36.149 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:36.149 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:36.149 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:36.149 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:36.149 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:36.149 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:36.149 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:36.149 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:36.408 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:36.408 "name": "BaseBdev2", 00:19:36.408 "aliases": [ 00:19:36.408 "2ed6ce89-35dc-4cfe-8407-ec9b757b4857" 00:19:36.408 ], 00:19:36.408 "product_name": "Malloc disk", 00:19:36.408 "block_size": 512, 00:19:36.408 "num_blocks": 65536, 00:19:36.408 "uuid": "2ed6ce89-35dc-4cfe-8407-ec9b757b4857", 00:19:36.408 "assigned_rate_limits": { 00:19:36.408 "rw_ios_per_sec": 0, 00:19:36.408 "rw_mbytes_per_sec": 0, 00:19:36.408 "r_mbytes_per_sec": 0, 00:19:36.408 "w_mbytes_per_sec": 0 00:19:36.408 }, 00:19:36.408 "claimed": true, 00:19:36.408 "claim_type": "exclusive_write", 00:19:36.408 "zoned": false, 00:19:36.408 "supported_io_types": { 00:19:36.408 "read": true, 00:19:36.408 "write": true, 00:19:36.408 "unmap": true, 00:19:36.408 
"flush": true, 00:19:36.408 "reset": true, 00:19:36.408 "nvme_admin": false, 00:19:36.408 "nvme_io": false, 00:19:36.408 "nvme_io_md": false, 00:19:36.408 "write_zeroes": true, 00:19:36.408 "zcopy": true, 00:19:36.408 "get_zone_info": false, 00:19:36.408 "zone_management": false, 00:19:36.408 "zone_append": false, 00:19:36.408 "compare": false, 00:19:36.408 "compare_and_write": false, 00:19:36.408 "abort": true, 00:19:36.408 "seek_hole": false, 00:19:36.408 "seek_data": false, 00:19:36.408 "copy": true, 00:19:36.408 "nvme_iov_md": false 00:19:36.408 }, 00:19:36.408 "memory_domains": [ 00:19:36.408 { 00:19:36.408 "dma_device_id": "system", 00:19:36.408 "dma_device_type": 1 00:19:36.408 }, 00:19:36.408 { 00:19:36.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.408 "dma_device_type": 2 00:19:36.408 } 00:19:36.408 ], 00:19:36.408 "driver_specific": {} 00:19:36.408 }' 00:19:36.408 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:36.408 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:36.408 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:36.408 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:36.408 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:36.408 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:36.408 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:36.408 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:36.408 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:36.408 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:36.667 22:02:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:36.667 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:36.667 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:36.667 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:36.667 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:36.667 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:36.667 "name": "BaseBdev3", 00:19:36.667 "aliases": [ 00:19:36.667 "02cd833f-0d70-424d-9976-4a902f11a7ce" 00:19:36.667 ], 00:19:36.667 "product_name": "Malloc disk", 00:19:36.667 "block_size": 512, 00:19:36.667 "num_blocks": 65536, 00:19:36.667 "uuid": "02cd833f-0d70-424d-9976-4a902f11a7ce", 00:19:36.667 "assigned_rate_limits": { 00:19:36.667 "rw_ios_per_sec": 0, 00:19:36.667 "rw_mbytes_per_sec": 0, 00:19:36.667 "r_mbytes_per_sec": 0, 00:19:36.667 "w_mbytes_per_sec": 0 00:19:36.667 }, 00:19:36.667 "claimed": true, 00:19:36.667 "claim_type": "exclusive_write", 00:19:36.667 "zoned": false, 00:19:36.667 "supported_io_types": { 00:19:36.668 "read": true, 00:19:36.668 "write": true, 00:19:36.668 "unmap": true, 00:19:36.668 "flush": true, 00:19:36.668 "reset": true, 00:19:36.668 "nvme_admin": false, 00:19:36.668 "nvme_io": false, 00:19:36.668 "nvme_io_md": false, 00:19:36.668 "write_zeroes": true, 00:19:36.668 "zcopy": true, 00:19:36.668 "get_zone_info": false, 00:19:36.668 "zone_management": false, 00:19:36.668 "zone_append": false, 00:19:36.668 "compare": false, 00:19:36.668 "compare_and_write": false, 00:19:36.668 "abort": true, 00:19:36.668 "seek_hole": false, 00:19:36.668 "seek_data": false, 00:19:36.668 "copy": true, 00:19:36.668 "nvme_iov_md": 
false 00:19:36.668 }, 00:19:36.668 "memory_domains": [ 00:19:36.668 { 00:19:36.668 "dma_device_id": "system", 00:19:36.668 "dma_device_type": 1 00:19:36.668 }, 00:19:36.668 { 00:19:36.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.668 "dma_device_type": 2 00:19:36.668 } 00:19:36.668 ], 00:19:36.668 "driver_specific": {} 00:19:36.668 }' 00:19:36.668 22:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:36.668 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:36.927 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:36.927 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:36.927 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:36.927 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:36.927 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:36.927 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:36.927 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:36.927 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:36.927 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:36.927 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:36.927 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:36.927 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:36.927 22:02:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:37.186 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:37.186 "name": "BaseBdev4", 00:19:37.186 "aliases": [ 00:19:37.186 "ba067fe6-ab6f-4e82-93a1-601249348cd6" 00:19:37.186 ], 00:19:37.186 "product_name": "Malloc disk", 00:19:37.186 "block_size": 512, 00:19:37.186 "num_blocks": 65536, 00:19:37.186 "uuid": "ba067fe6-ab6f-4e82-93a1-601249348cd6", 00:19:37.186 "assigned_rate_limits": { 00:19:37.186 "rw_ios_per_sec": 0, 00:19:37.186 "rw_mbytes_per_sec": 0, 00:19:37.186 "r_mbytes_per_sec": 0, 00:19:37.186 "w_mbytes_per_sec": 0 00:19:37.186 }, 00:19:37.186 "claimed": true, 00:19:37.186 "claim_type": "exclusive_write", 00:19:37.186 "zoned": false, 00:19:37.186 "supported_io_types": { 00:19:37.186 "read": true, 00:19:37.186 "write": true, 00:19:37.186 "unmap": true, 00:19:37.186 "flush": true, 00:19:37.186 "reset": true, 00:19:37.186 "nvme_admin": false, 00:19:37.186 "nvme_io": false, 00:19:37.186 "nvme_io_md": false, 00:19:37.186 "write_zeroes": true, 00:19:37.186 "zcopy": true, 00:19:37.186 "get_zone_info": false, 00:19:37.186 "zone_management": false, 00:19:37.186 "zone_append": false, 00:19:37.186 "compare": false, 00:19:37.186 "compare_and_write": false, 00:19:37.186 "abort": true, 00:19:37.186 "seek_hole": false, 00:19:37.186 "seek_data": false, 00:19:37.186 "copy": true, 00:19:37.186 "nvme_iov_md": false 00:19:37.186 }, 00:19:37.186 "memory_domains": [ 00:19:37.186 { 00:19:37.186 "dma_device_id": "system", 00:19:37.186 "dma_device_type": 1 00:19:37.186 }, 00:19:37.186 { 00:19:37.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.186 "dma_device_type": 2 00:19:37.186 } 00:19:37.186 ], 00:19:37.186 "driver_specific": {} 00:19:37.186 }' 00:19:37.186 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:37.186 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:19:37.186 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:37.186 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:37.186 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:37.186 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:37.186 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:37.486 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:37.486 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:37.486 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:37.486 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:37.486 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:37.486 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:37.486 [2024-07-13 22:02:56.856528] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:37.487 [2024-07-13 22:02:56.856559] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:37.487 [2024-07-13 22:02:56.856606] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:37.746 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:37.746 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:19:37.746 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:37.746 22:02:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:19:37.746 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:19:37.746 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:19:37.746 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:37.746 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:19:37.746 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:37.746 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:37.746 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:37.746 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:37.746 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:37.746 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:37.746 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:37.746 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:37.746 22:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.746 22:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:37.746 "name": "Existed_Raid", 00:19:37.746 "uuid": "5141a397-75b2-4871-b716-7cb0c5cfeb0a", 00:19:37.746 "strip_size_kb": 64, 00:19:37.746 "state": "offline", 00:19:37.746 
"raid_level": "concat", 00:19:37.746 "superblock": true, 00:19:37.746 "num_base_bdevs": 4, 00:19:37.746 "num_base_bdevs_discovered": 3, 00:19:37.746 "num_base_bdevs_operational": 3, 00:19:37.746 "base_bdevs_list": [ 00:19:37.746 { 00:19:37.746 "name": null, 00:19:37.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.746 "is_configured": false, 00:19:37.746 "data_offset": 2048, 00:19:37.746 "data_size": 63488 00:19:37.746 }, 00:19:37.746 { 00:19:37.746 "name": "BaseBdev2", 00:19:37.746 "uuid": "2ed6ce89-35dc-4cfe-8407-ec9b757b4857", 00:19:37.746 "is_configured": true, 00:19:37.746 "data_offset": 2048, 00:19:37.746 "data_size": 63488 00:19:37.746 }, 00:19:37.746 { 00:19:37.746 "name": "BaseBdev3", 00:19:37.746 "uuid": "02cd833f-0d70-424d-9976-4a902f11a7ce", 00:19:37.746 "is_configured": true, 00:19:37.746 "data_offset": 2048, 00:19:37.746 "data_size": 63488 00:19:37.746 }, 00:19:37.746 { 00:19:37.746 "name": "BaseBdev4", 00:19:37.746 "uuid": "ba067fe6-ab6f-4e82-93a1-601249348cd6", 00:19:37.746 "is_configured": true, 00:19:37.746 "data_offset": 2048, 00:19:37.746 "data_size": 63488 00:19:37.746 } 00:19:37.746 ] 00:19:37.746 }' 00:19:37.746 22:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:37.746 22:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.315 22:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:38.315 22:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:38.315 22:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.315 22:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:38.315 22:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
raid_bdev=Existed_Raid 00:19:38.574 22:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:38.574 22:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:38.574 [2024-07-13 22:02:57.855939] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:38.833 22:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:38.833 22:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:38.833 22:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.833 22:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:38.833 22:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:38.833 22:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:38.833 22:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:39.092 [2024-07-13 22:02:58.293201] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:39.092 22:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:39.092 22:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:39.092 22:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:39.092 22:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.352 22:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:39.352 22:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:39.352 22:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:39.352 [2024-07-13 22:02:58.717087] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:39.352 [2024-07-13 22:02:58.717133] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:19:39.611 22:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:39.611 22:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:39.611 22:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:39.611 22:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.611 22:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:39.611 22:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:39.611 22:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:19:39.611 22:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:39.611 22:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:39.611 22:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:39.871 BaseBdev2 00:19:39.871 22:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:39.871 22:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:39.871 22:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:39.871 22:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:39.871 22:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:39.871 22:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:39.871 22:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:40.130 22:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:40.130 [ 00:19:40.130 { 00:19:40.130 "name": "BaseBdev2", 00:19:40.130 "aliases": [ 00:19:40.130 "37e5b7f7-f0e9-4159-9345-465e1f69e8a7" 00:19:40.130 ], 00:19:40.130 "product_name": "Malloc disk", 00:19:40.130 "block_size": 512, 00:19:40.130 "num_blocks": 65536, 00:19:40.130 "uuid": "37e5b7f7-f0e9-4159-9345-465e1f69e8a7", 00:19:40.130 "assigned_rate_limits": { 00:19:40.130 "rw_ios_per_sec": 0, 00:19:40.130 "rw_mbytes_per_sec": 0, 00:19:40.130 "r_mbytes_per_sec": 0, 00:19:40.130 "w_mbytes_per_sec": 0 00:19:40.130 }, 00:19:40.130 "claimed": false, 00:19:40.130 "zoned": false, 00:19:40.130 "supported_io_types": { 00:19:40.130 "read": true, 00:19:40.130 "write": true, 00:19:40.130 "unmap": true, 00:19:40.130 "flush": 
true, 00:19:40.130 "reset": true, 00:19:40.130 "nvme_admin": false, 00:19:40.130 "nvme_io": false, 00:19:40.130 "nvme_io_md": false, 00:19:40.130 "write_zeroes": true, 00:19:40.130 "zcopy": true, 00:19:40.130 "get_zone_info": false, 00:19:40.130 "zone_management": false, 00:19:40.130 "zone_append": false, 00:19:40.130 "compare": false, 00:19:40.130 "compare_and_write": false, 00:19:40.130 "abort": true, 00:19:40.130 "seek_hole": false, 00:19:40.130 "seek_data": false, 00:19:40.130 "copy": true, 00:19:40.130 "nvme_iov_md": false 00:19:40.130 }, 00:19:40.130 "memory_domains": [ 00:19:40.130 { 00:19:40.130 "dma_device_id": "system", 00:19:40.130 "dma_device_type": 1 00:19:40.130 }, 00:19:40.130 { 00:19:40.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.130 "dma_device_type": 2 00:19:40.130 } 00:19:40.130 ], 00:19:40.130 "driver_specific": {} 00:19:40.130 } 00:19:40.130 ] 00:19:40.130 22:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:40.130 22:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:40.130 22:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:40.130 22:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:40.389 BaseBdev3 00:19:40.389 22:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:40.389 22:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:40.389 22:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:40.389 22:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:40.389 22:02:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:40.389 22:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:40.389 22:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:40.648 22:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:40.907 [ 00:19:40.907 { 00:19:40.907 "name": "BaseBdev3", 00:19:40.907 "aliases": [ 00:19:40.907 "e4dd0321-a794-4dd7-88b3-2851811f4829" 00:19:40.907 ], 00:19:40.907 "product_name": "Malloc disk", 00:19:40.907 "block_size": 512, 00:19:40.907 "num_blocks": 65536, 00:19:40.907 "uuid": "e4dd0321-a794-4dd7-88b3-2851811f4829", 00:19:40.907 "assigned_rate_limits": { 00:19:40.907 "rw_ios_per_sec": 0, 00:19:40.907 "rw_mbytes_per_sec": 0, 00:19:40.907 "r_mbytes_per_sec": 0, 00:19:40.907 "w_mbytes_per_sec": 0 00:19:40.907 }, 00:19:40.907 "claimed": false, 00:19:40.907 "zoned": false, 00:19:40.907 "supported_io_types": { 00:19:40.907 "read": true, 00:19:40.907 "write": true, 00:19:40.907 "unmap": true, 00:19:40.907 "flush": true, 00:19:40.907 "reset": true, 00:19:40.907 "nvme_admin": false, 00:19:40.907 "nvme_io": false, 00:19:40.907 "nvme_io_md": false, 00:19:40.907 "write_zeroes": true, 00:19:40.907 "zcopy": true, 00:19:40.907 "get_zone_info": false, 00:19:40.907 "zone_management": false, 00:19:40.907 "zone_append": false, 00:19:40.907 "compare": false, 00:19:40.907 "compare_and_write": false, 00:19:40.907 "abort": true, 00:19:40.907 "seek_hole": false, 00:19:40.907 "seek_data": false, 00:19:40.907 "copy": true, 00:19:40.907 "nvme_iov_md": false 00:19:40.907 }, 00:19:40.907 "memory_domains": [ 00:19:40.907 { 00:19:40.907 "dma_device_id": "system", 00:19:40.907 "dma_device_type": 1 
00:19:40.907 }, 00:19:40.907 { 00:19:40.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.907 "dma_device_type": 2 00:19:40.907 } 00:19:40.907 ], 00:19:40.907 "driver_specific": {} 00:19:40.907 } 00:19:40.907 ] 00:19:40.907 22:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:40.907 22:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:40.907 22:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:40.907 22:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:40.907 BaseBdev4 00:19:40.907 22:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:19:40.907 22:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:19:40.907 22:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:40.907 22:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:40.907 22:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:40.907 22:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:40.907 22:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:41.166 22:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:41.425 [ 00:19:41.425 { 00:19:41.425 "name": "BaseBdev4", 00:19:41.425 "aliases": [ 
00:19:41.425 "53e13ec3-6a5e-4cfe-b1a4-8799d40e34bd" 00:19:41.425 ], 00:19:41.425 "product_name": "Malloc disk", 00:19:41.425 "block_size": 512, 00:19:41.425 "num_blocks": 65536, 00:19:41.425 "uuid": "53e13ec3-6a5e-4cfe-b1a4-8799d40e34bd", 00:19:41.425 "assigned_rate_limits": { 00:19:41.425 "rw_ios_per_sec": 0, 00:19:41.425 "rw_mbytes_per_sec": 0, 00:19:41.425 "r_mbytes_per_sec": 0, 00:19:41.425 "w_mbytes_per_sec": 0 00:19:41.425 }, 00:19:41.425 "claimed": false, 00:19:41.425 "zoned": false, 00:19:41.425 "supported_io_types": { 00:19:41.425 "read": true, 00:19:41.425 "write": true, 00:19:41.425 "unmap": true, 00:19:41.425 "flush": true, 00:19:41.425 "reset": true, 00:19:41.425 "nvme_admin": false, 00:19:41.425 "nvme_io": false, 00:19:41.425 "nvme_io_md": false, 00:19:41.425 "write_zeroes": true, 00:19:41.425 "zcopy": true, 00:19:41.425 "get_zone_info": false, 00:19:41.426 "zone_management": false, 00:19:41.426 "zone_append": false, 00:19:41.426 "compare": false, 00:19:41.426 "compare_and_write": false, 00:19:41.426 "abort": true, 00:19:41.426 "seek_hole": false, 00:19:41.426 "seek_data": false, 00:19:41.426 "copy": true, 00:19:41.426 "nvme_iov_md": false 00:19:41.426 }, 00:19:41.426 "memory_domains": [ 00:19:41.426 { 00:19:41.426 "dma_device_id": "system", 00:19:41.426 "dma_device_type": 1 00:19:41.426 }, 00:19:41.426 { 00:19:41.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.426 "dma_device_type": 2 00:19:41.426 } 00:19:41.426 ], 00:19:41.426 "driver_specific": {} 00:19:41.426 } 00:19:41.426 ] 00:19:41.426 22:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:41.426 22:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:41.426 22:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:41.426 22:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:41.426 [2024-07-13 22:03:00.771836] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:41.426 [2024-07-13 22:03:00.771878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:41.426 [2024-07-13 22:03:00.771925] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:41.426 [2024-07-13 22:03:00.773670] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:41.426 [2024-07-13 22:03:00.773717] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:41.426 22:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:41.426 22:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:41.426 22:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:41.426 22:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:41.426 22:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:41.426 22:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:41.426 22:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:41.426 22:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:41.426 22:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:41.426 22:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 
-- # local tmp 00:19:41.426 22:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.426 22:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:41.685 22:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:41.685 "name": "Existed_Raid", 00:19:41.685 "uuid": "63f3f2da-885c-4514-8929-3b15181cc420", 00:19:41.685 "strip_size_kb": 64, 00:19:41.685 "state": "configuring", 00:19:41.685 "raid_level": "concat", 00:19:41.685 "superblock": true, 00:19:41.685 "num_base_bdevs": 4, 00:19:41.685 "num_base_bdevs_discovered": 3, 00:19:41.685 "num_base_bdevs_operational": 4, 00:19:41.685 "base_bdevs_list": [ 00:19:41.685 { 00:19:41.685 "name": "BaseBdev1", 00:19:41.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.685 "is_configured": false, 00:19:41.685 "data_offset": 0, 00:19:41.685 "data_size": 0 00:19:41.685 }, 00:19:41.685 { 00:19:41.685 "name": "BaseBdev2", 00:19:41.685 "uuid": "37e5b7f7-f0e9-4159-9345-465e1f69e8a7", 00:19:41.685 "is_configured": true, 00:19:41.685 "data_offset": 2048, 00:19:41.685 "data_size": 63488 00:19:41.685 }, 00:19:41.685 { 00:19:41.685 "name": "BaseBdev3", 00:19:41.685 "uuid": "e4dd0321-a794-4dd7-88b3-2851811f4829", 00:19:41.685 "is_configured": true, 00:19:41.685 "data_offset": 2048, 00:19:41.685 "data_size": 63488 00:19:41.685 }, 00:19:41.685 { 00:19:41.685 "name": "BaseBdev4", 00:19:41.685 "uuid": "53e13ec3-6a5e-4cfe-b1a4-8799d40e34bd", 00:19:41.685 "is_configured": true, 00:19:41.685 "data_offset": 2048, 00:19:41.685 "data_size": 63488 00:19:41.685 } 00:19:41.685 ] 00:19:41.685 }' 00:19:41.685 22:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:41.685 22:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:19:42.253 22:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:42.253 [2024-07-13 22:03:01.573920] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:42.253 22:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:42.253 22:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:42.253 22:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:42.253 22:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:42.253 22:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:42.253 22:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:42.253 22:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:42.253 22:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:42.253 22:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:42.253 22:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:42.253 22:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.253 22:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.512 22:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:42.512 "name": 
"Existed_Raid", 00:19:42.512 "uuid": "63f3f2da-885c-4514-8929-3b15181cc420", 00:19:42.512 "strip_size_kb": 64, 00:19:42.512 "state": "configuring", 00:19:42.512 "raid_level": "concat", 00:19:42.512 "superblock": true, 00:19:42.512 "num_base_bdevs": 4, 00:19:42.512 "num_base_bdevs_discovered": 2, 00:19:42.512 "num_base_bdevs_operational": 4, 00:19:42.512 "base_bdevs_list": [ 00:19:42.512 { 00:19:42.512 "name": "BaseBdev1", 00:19:42.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.512 "is_configured": false, 00:19:42.512 "data_offset": 0, 00:19:42.512 "data_size": 0 00:19:42.512 }, 00:19:42.512 { 00:19:42.512 "name": null, 00:19:42.512 "uuid": "37e5b7f7-f0e9-4159-9345-465e1f69e8a7", 00:19:42.512 "is_configured": false, 00:19:42.512 "data_offset": 2048, 00:19:42.512 "data_size": 63488 00:19:42.512 }, 00:19:42.512 { 00:19:42.512 "name": "BaseBdev3", 00:19:42.512 "uuid": "e4dd0321-a794-4dd7-88b3-2851811f4829", 00:19:42.512 "is_configured": true, 00:19:42.512 "data_offset": 2048, 00:19:42.512 "data_size": 63488 00:19:42.512 }, 00:19:42.512 { 00:19:42.512 "name": "BaseBdev4", 00:19:42.512 "uuid": "53e13ec3-6a5e-4cfe-b1a4-8799d40e34bd", 00:19:42.512 "is_configured": true, 00:19:42.512 "data_offset": 2048, 00:19:42.512 "data_size": 63488 00:19:42.512 } 00:19:42.512 ] 00:19:42.512 }' 00:19:42.512 22:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:42.512 22:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.078 22:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:43.078 22:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.078 22:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:43.078 22:03:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:43.336 [2024-07-13 22:03:02.605225] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:43.336 BaseBdev1 00:19:43.336 22:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:43.336 22:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:43.336 22:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:43.336 22:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:43.336 22:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:43.336 22:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:43.336 22:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:43.595 22:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:43.595 [ 00:19:43.595 { 00:19:43.595 "name": "BaseBdev1", 00:19:43.595 "aliases": [ 00:19:43.595 "074f13f0-5854-4582-b638-65dcb6d95c57" 00:19:43.595 ], 00:19:43.595 "product_name": "Malloc disk", 00:19:43.595 "block_size": 512, 00:19:43.595 "num_blocks": 65536, 00:19:43.595 "uuid": "074f13f0-5854-4582-b638-65dcb6d95c57", 00:19:43.595 "assigned_rate_limits": { 00:19:43.595 "rw_ios_per_sec": 0, 00:19:43.595 "rw_mbytes_per_sec": 0, 00:19:43.595 "r_mbytes_per_sec": 0, 00:19:43.595 "w_mbytes_per_sec": 0 00:19:43.595 }, 
00:19:43.595 "claimed": true, 00:19:43.595 "claim_type": "exclusive_write", 00:19:43.595 "zoned": false, 00:19:43.595 "supported_io_types": { 00:19:43.595 "read": true, 00:19:43.595 "write": true, 00:19:43.595 "unmap": true, 00:19:43.595 "flush": true, 00:19:43.595 "reset": true, 00:19:43.595 "nvme_admin": false, 00:19:43.595 "nvme_io": false, 00:19:43.595 "nvme_io_md": false, 00:19:43.595 "write_zeroes": true, 00:19:43.595 "zcopy": true, 00:19:43.595 "get_zone_info": false, 00:19:43.595 "zone_management": false, 00:19:43.595 "zone_append": false, 00:19:43.595 "compare": false, 00:19:43.595 "compare_and_write": false, 00:19:43.595 "abort": true, 00:19:43.595 "seek_hole": false, 00:19:43.595 "seek_data": false, 00:19:43.595 "copy": true, 00:19:43.595 "nvme_iov_md": false 00:19:43.595 }, 00:19:43.595 "memory_domains": [ 00:19:43.595 { 00:19:43.595 "dma_device_id": "system", 00:19:43.595 "dma_device_type": 1 00:19:43.595 }, 00:19:43.595 { 00:19:43.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.595 "dma_device_type": 2 00:19:43.595 } 00:19:43.595 ], 00:19:43.595 "driver_specific": {} 00:19:43.595 } 00:19:43.595 ] 00:19:43.595 22:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:43.595 22:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:43.595 22:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:43.595 22:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:43.595 22:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:43.595 22:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:43.595 22:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:19:43.595 22:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:43.595 22:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:43.595 22:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:43.595 22:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:43.595 22:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.595 22:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:43.854 22:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:43.854 "name": "Existed_Raid", 00:19:43.854 "uuid": "63f3f2da-885c-4514-8929-3b15181cc420", 00:19:43.854 "strip_size_kb": 64, 00:19:43.854 "state": "configuring", 00:19:43.854 "raid_level": "concat", 00:19:43.854 "superblock": true, 00:19:43.854 "num_base_bdevs": 4, 00:19:43.854 "num_base_bdevs_discovered": 3, 00:19:43.854 "num_base_bdevs_operational": 4, 00:19:43.854 "base_bdevs_list": [ 00:19:43.854 { 00:19:43.854 "name": "BaseBdev1", 00:19:43.854 "uuid": "074f13f0-5854-4582-b638-65dcb6d95c57", 00:19:43.854 "is_configured": true, 00:19:43.854 "data_offset": 2048, 00:19:43.854 "data_size": 63488 00:19:43.854 }, 00:19:43.854 { 00:19:43.854 "name": null, 00:19:43.854 "uuid": "37e5b7f7-f0e9-4159-9345-465e1f69e8a7", 00:19:43.854 "is_configured": false, 00:19:43.854 "data_offset": 2048, 00:19:43.854 "data_size": 63488 00:19:43.854 }, 00:19:43.854 { 00:19:43.854 "name": "BaseBdev3", 00:19:43.854 "uuid": "e4dd0321-a794-4dd7-88b3-2851811f4829", 00:19:43.854 "is_configured": true, 00:19:43.854 "data_offset": 2048, 00:19:43.854 "data_size": 63488 00:19:43.854 }, 00:19:43.854 { 00:19:43.854 
"name": "BaseBdev4", 00:19:43.854 "uuid": "53e13ec3-6a5e-4cfe-b1a4-8799d40e34bd", 00:19:43.854 "is_configured": true, 00:19:43.854 "data_offset": 2048, 00:19:43.854 "data_size": 63488 00:19:43.854 } 00:19:43.854 ] 00:19:43.854 }' 00:19:43.854 22:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:43.854 22:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.420 22:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.420 22:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:44.420 22:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:44.420 22:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:44.678 [2024-07-13 22:03:03.948837] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:44.678 22:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:44.678 22:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:44.678 22:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:44.678 22:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:44.678 22:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:44.678 22:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:44.678 22:03:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:44.678 22:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:44.678 22:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:44.678 22:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:44.678 22:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.678 22:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.937 22:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:44.937 "name": "Existed_Raid", 00:19:44.937 "uuid": "63f3f2da-885c-4514-8929-3b15181cc420", 00:19:44.937 "strip_size_kb": 64, 00:19:44.937 "state": "configuring", 00:19:44.937 "raid_level": "concat", 00:19:44.937 "superblock": true, 00:19:44.937 "num_base_bdevs": 4, 00:19:44.937 "num_base_bdevs_discovered": 2, 00:19:44.937 "num_base_bdevs_operational": 4, 00:19:44.937 "base_bdevs_list": [ 00:19:44.937 { 00:19:44.937 "name": "BaseBdev1", 00:19:44.937 "uuid": "074f13f0-5854-4582-b638-65dcb6d95c57", 00:19:44.937 "is_configured": true, 00:19:44.937 "data_offset": 2048, 00:19:44.937 "data_size": 63488 00:19:44.937 }, 00:19:44.937 { 00:19:44.937 "name": null, 00:19:44.937 "uuid": "37e5b7f7-f0e9-4159-9345-465e1f69e8a7", 00:19:44.937 "is_configured": false, 00:19:44.937 "data_offset": 2048, 00:19:44.937 "data_size": 63488 00:19:44.937 }, 00:19:44.937 { 00:19:44.937 "name": null, 00:19:44.937 "uuid": "e4dd0321-a794-4dd7-88b3-2851811f4829", 00:19:44.937 "is_configured": false, 00:19:44.937 "data_offset": 2048, 00:19:44.937 "data_size": 63488 00:19:44.937 }, 00:19:44.937 { 00:19:44.937 "name": "BaseBdev4", 
00:19:44.937 "uuid": "53e13ec3-6a5e-4cfe-b1a4-8799d40e34bd", 00:19:44.937 "is_configured": true, 00:19:44.937 "data_offset": 2048, 00:19:44.937 "data_size": 63488 00:19:44.937 } 00:19:44.937 ] 00:19:44.937 }' 00:19:44.937 22:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:44.937 22:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.504 22:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:45.504 22:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.504 22:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:45.504 22:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:45.763 [2024-07-13 22:03:04.959488] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:45.763 22:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:45.763 22:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:45.763 22:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:45.763 22:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:45.763 22:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:45.763 22:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:45.763 22:03:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:45.763 22:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:45.763 22:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:45.763 22:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:45.763 22:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.763 22:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:45.763 22:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:45.763 "name": "Existed_Raid", 00:19:45.763 "uuid": "63f3f2da-885c-4514-8929-3b15181cc420", 00:19:45.763 "strip_size_kb": 64, 00:19:45.763 "state": "configuring", 00:19:45.763 "raid_level": "concat", 00:19:45.763 "superblock": true, 00:19:45.763 "num_base_bdevs": 4, 00:19:45.763 "num_base_bdevs_discovered": 3, 00:19:45.763 "num_base_bdevs_operational": 4, 00:19:45.763 "base_bdevs_list": [ 00:19:45.763 { 00:19:45.763 "name": "BaseBdev1", 00:19:45.763 "uuid": "074f13f0-5854-4582-b638-65dcb6d95c57", 00:19:45.763 "is_configured": true, 00:19:45.763 "data_offset": 2048, 00:19:45.763 "data_size": 63488 00:19:45.763 }, 00:19:45.763 { 00:19:45.763 "name": null, 00:19:45.763 "uuid": "37e5b7f7-f0e9-4159-9345-465e1f69e8a7", 00:19:45.763 "is_configured": false, 00:19:45.763 "data_offset": 2048, 00:19:45.763 "data_size": 63488 00:19:45.763 }, 00:19:45.763 { 00:19:45.763 "name": "BaseBdev3", 00:19:45.763 "uuid": "e4dd0321-a794-4dd7-88b3-2851811f4829", 00:19:45.763 "is_configured": true, 00:19:45.763 "data_offset": 2048, 00:19:45.763 "data_size": 63488 00:19:45.763 }, 00:19:45.763 { 00:19:45.763 "name": "BaseBdev4", 
00:19:45.763 "uuid": "53e13ec3-6a5e-4cfe-b1a4-8799d40e34bd", 00:19:45.763 "is_configured": true, 00:19:45.763 "data_offset": 2048, 00:19:45.763 "data_size": 63488 00:19:45.763 } 00:19:45.763 ] 00:19:45.763 }' 00:19:45.763 22:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:45.763 22:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.330 22:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.330 22:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:46.589 22:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:46.589 22:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:46.589 [2024-07-13 22:03:05.946141] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:46.848 22:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:46.848 22:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:46.848 22:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:46.848 22:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:46.848 22:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:46.848 22:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:46.848 22:03:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:46.848 22:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:46.848 22:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:46.848 22:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:46.848 22:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.848 22:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:46.848 22:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:46.848 "name": "Existed_Raid", 00:19:46.848 "uuid": "63f3f2da-885c-4514-8929-3b15181cc420", 00:19:46.848 "strip_size_kb": 64, 00:19:46.848 "state": "configuring", 00:19:46.848 "raid_level": "concat", 00:19:46.848 "superblock": true, 00:19:46.848 "num_base_bdevs": 4, 00:19:46.848 "num_base_bdevs_discovered": 2, 00:19:46.848 "num_base_bdevs_operational": 4, 00:19:46.848 "base_bdevs_list": [ 00:19:46.848 { 00:19:46.848 "name": null, 00:19:46.848 "uuid": "074f13f0-5854-4582-b638-65dcb6d95c57", 00:19:46.848 "is_configured": false, 00:19:46.848 "data_offset": 2048, 00:19:46.848 "data_size": 63488 00:19:46.848 }, 00:19:46.848 { 00:19:46.848 "name": null, 00:19:46.848 "uuid": "37e5b7f7-f0e9-4159-9345-465e1f69e8a7", 00:19:46.848 "is_configured": false, 00:19:46.848 "data_offset": 2048, 00:19:46.848 "data_size": 63488 00:19:46.848 }, 00:19:46.848 { 00:19:46.848 "name": "BaseBdev3", 00:19:46.848 "uuid": "e4dd0321-a794-4dd7-88b3-2851811f4829", 00:19:46.848 "is_configured": true, 00:19:46.848 "data_offset": 2048, 00:19:46.848 "data_size": 63488 00:19:46.848 }, 00:19:46.848 { 00:19:46.848 "name": "BaseBdev4", 00:19:46.848 "uuid": 
"53e13ec3-6a5e-4cfe-b1a4-8799d40e34bd", 00:19:46.848 "is_configured": true, 00:19:46.848 "data_offset": 2048, 00:19:46.848 "data_size": 63488 00:19:46.848 } 00:19:46.848 ] 00:19:46.848 }' 00:19:46.848 22:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:46.848 22:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.414 22:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.414 22:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:47.674 22:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:47.674 22:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:47.674 [2024-07-13 22:03:07.011371] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:47.674 22:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:47.674 22:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:47.674 22:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:47.674 22:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:47.674 22:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:47.674 22:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:47.674 22:03:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:47.674 22:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:47.674 22:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:47.674 22:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:47.674 22:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.674 22:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.933 22:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:47.933 "name": "Existed_Raid", 00:19:47.933 "uuid": "63f3f2da-885c-4514-8929-3b15181cc420", 00:19:47.933 "strip_size_kb": 64, 00:19:47.933 "state": "configuring", 00:19:47.933 "raid_level": "concat", 00:19:47.933 "superblock": true, 00:19:47.933 "num_base_bdevs": 4, 00:19:47.933 "num_base_bdevs_discovered": 3, 00:19:47.933 "num_base_bdevs_operational": 4, 00:19:47.933 "base_bdevs_list": [ 00:19:47.933 { 00:19:47.933 "name": null, 00:19:47.933 "uuid": "074f13f0-5854-4582-b638-65dcb6d95c57", 00:19:47.933 "is_configured": false, 00:19:47.933 "data_offset": 2048, 00:19:47.933 "data_size": 63488 00:19:47.933 }, 00:19:47.933 { 00:19:47.933 "name": "BaseBdev2", 00:19:47.933 "uuid": "37e5b7f7-f0e9-4159-9345-465e1f69e8a7", 00:19:47.933 "is_configured": true, 00:19:47.933 "data_offset": 2048, 00:19:47.933 "data_size": 63488 00:19:47.933 }, 00:19:47.933 { 00:19:47.933 "name": "BaseBdev3", 00:19:47.933 "uuid": "e4dd0321-a794-4dd7-88b3-2851811f4829", 00:19:47.933 "is_configured": true, 00:19:47.933 "data_offset": 2048, 00:19:47.933 "data_size": 63488 00:19:47.933 }, 00:19:47.933 { 00:19:47.933 "name": "BaseBdev4", 
00:19:47.933 "uuid": "53e13ec3-6a5e-4cfe-b1a4-8799d40e34bd", 00:19:47.933 "is_configured": true, 00:19:47.933 "data_offset": 2048, 00:19:47.933 "data_size": 63488 00:19:47.933 } 00:19:47.933 ] 00:19:47.933 }' 00:19:47.933 22:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:47.933 22:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.501 22:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.501 22:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:48.501 22:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:48.501 22:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.501 22:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:48.791 22:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 074f13f0-5854-4582-b638-65dcb6d95c57 00:19:49.050 [2024-07-13 22:03:08.209179] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:49.050 [2024-07-13 22:03:08.209390] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042080 00:19:49.050 [2024-07-13 22:03:08.209407] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:49.050 [2024-07-13 22:03:08.209704] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:19:49.050 [2024-07-13 
22:03:08.209864] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042080 00:19:49.050 [2024-07-13 22:03:08.209878] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000042080 00:19:49.050 NewBaseBdev 00:19:49.050 [2024-07-13 22:03:08.210006] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:49.050 22:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:49.050 22:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:19:49.050 22:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:49.050 22:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:49.050 22:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:49.051 22:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:49.051 22:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:49.051 22:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:49.308 [ 00:19:49.308 { 00:19:49.308 "name": "NewBaseBdev", 00:19:49.308 "aliases": [ 00:19:49.308 "074f13f0-5854-4582-b638-65dcb6d95c57" 00:19:49.308 ], 00:19:49.308 "product_name": "Malloc disk", 00:19:49.308 "block_size": 512, 00:19:49.308 "num_blocks": 65536, 00:19:49.308 "uuid": "074f13f0-5854-4582-b638-65dcb6d95c57", 00:19:49.308 "assigned_rate_limits": { 00:19:49.308 "rw_ios_per_sec": 0, 00:19:49.308 "rw_mbytes_per_sec": 0, 00:19:49.308 
"r_mbytes_per_sec": 0, 00:19:49.308 "w_mbytes_per_sec": 0 00:19:49.308 }, 00:19:49.308 "claimed": true, 00:19:49.308 "claim_type": "exclusive_write", 00:19:49.308 "zoned": false, 00:19:49.308 "supported_io_types": { 00:19:49.308 "read": true, 00:19:49.308 "write": true, 00:19:49.308 "unmap": true, 00:19:49.308 "flush": true, 00:19:49.308 "reset": true, 00:19:49.308 "nvme_admin": false, 00:19:49.308 "nvme_io": false, 00:19:49.308 "nvme_io_md": false, 00:19:49.308 "write_zeroes": true, 00:19:49.308 "zcopy": true, 00:19:49.308 "get_zone_info": false, 00:19:49.308 "zone_management": false, 00:19:49.308 "zone_append": false, 00:19:49.308 "compare": false, 00:19:49.308 "compare_and_write": false, 00:19:49.308 "abort": true, 00:19:49.308 "seek_hole": false, 00:19:49.308 "seek_data": false, 00:19:49.308 "copy": true, 00:19:49.308 "nvme_iov_md": false 00:19:49.308 }, 00:19:49.308 "memory_domains": [ 00:19:49.308 { 00:19:49.308 "dma_device_id": "system", 00:19:49.308 "dma_device_type": 1 00:19:49.308 }, 00:19:49.308 { 00:19:49.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.308 "dma_device_type": 2 00:19:49.308 } 00:19:49.308 ], 00:19:49.308 "driver_specific": {} 00:19:49.308 } 00:19:49.308 ] 00:19:49.308 22:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:49.308 22:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:49.308 22:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:49.308 22:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:49.308 22:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:49.308 22:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:49.308 22:03:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:49.308 22:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:49.308 22:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:49.309 22:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:49.309 22:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:49.309 22:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.309 22:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:49.567 22:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:49.567 "name": "Existed_Raid", 00:19:49.567 "uuid": "63f3f2da-885c-4514-8929-3b15181cc420", 00:19:49.567 "strip_size_kb": 64, 00:19:49.567 "state": "online", 00:19:49.567 "raid_level": "concat", 00:19:49.567 "superblock": true, 00:19:49.567 "num_base_bdevs": 4, 00:19:49.567 "num_base_bdevs_discovered": 4, 00:19:49.567 "num_base_bdevs_operational": 4, 00:19:49.567 "base_bdevs_list": [ 00:19:49.567 { 00:19:49.567 "name": "NewBaseBdev", 00:19:49.567 "uuid": "074f13f0-5854-4582-b638-65dcb6d95c57", 00:19:49.567 "is_configured": true, 00:19:49.567 "data_offset": 2048, 00:19:49.567 "data_size": 63488 00:19:49.567 }, 00:19:49.567 { 00:19:49.567 "name": "BaseBdev2", 00:19:49.567 "uuid": "37e5b7f7-f0e9-4159-9345-465e1f69e8a7", 00:19:49.567 "is_configured": true, 00:19:49.567 "data_offset": 2048, 00:19:49.567 "data_size": 63488 00:19:49.567 }, 00:19:49.567 { 00:19:49.567 "name": "BaseBdev3", 00:19:49.567 "uuid": "e4dd0321-a794-4dd7-88b3-2851811f4829", 00:19:49.567 "is_configured": true, 00:19:49.567 "data_offset": 2048, 00:19:49.567 
"data_size": 63488 00:19:49.567 }, 00:19:49.567 { 00:19:49.567 "name": "BaseBdev4", 00:19:49.567 "uuid": "53e13ec3-6a5e-4cfe-b1a4-8799d40e34bd", 00:19:49.567 "is_configured": true, 00:19:49.567 "data_offset": 2048, 00:19:49.567 "data_size": 63488 00:19:49.567 } 00:19:49.567 ] 00:19:49.567 }' 00:19:49.567 22:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:49.567 22:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.134 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:50.134 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:50.134 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:50.134 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:50.134 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:50.134 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:50.134 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:50.134 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:50.134 [2024-07-13 22:03:09.360642] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:50.134 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:50.134 "name": "Existed_Raid", 00:19:50.134 "aliases": [ 00:19:50.134 "63f3f2da-885c-4514-8929-3b15181cc420" 00:19:50.134 ], 00:19:50.134 "product_name": "Raid Volume", 00:19:50.134 "block_size": 512, 00:19:50.134 "num_blocks": 253952, 00:19:50.134 "uuid": 
"63f3f2da-885c-4514-8929-3b15181cc420", 00:19:50.134 "assigned_rate_limits": { 00:19:50.134 "rw_ios_per_sec": 0, 00:19:50.134 "rw_mbytes_per_sec": 0, 00:19:50.134 "r_mbytes_per_sec": 0, 00:19:50.134 "w_mbytes_per_sec": 0 00:19:50.134 }, 00:19:50.134 "claimed": false, 00:19:50.134 "zoned": false, 00:19:50.134 "supported_io_types": { 00:19:50.134 "read": true, 00:19:50.134 "write": true, 00:19:50.134 "unmap": true, 00:19:50.134 "flush": true, 00:19:50.134 "reset": true, 00:19:50.134 "nvme_admin": false, 00:19:50.134 "nvme_io": false, 00:19:50.134 "nvme_io_md": false, 00:19:50.134 "write_zeroes": true, 00:19:50.134 "zcopy": false, 00:19:50.134 "get_zone_info": false, 00:19:50.134 "zone_management": false, 00:19:50.134 "zone_append": false, 00:19:50.134 "compare": false, 00:19:50.134 "compare_and_write": false, 00:19:50.134 "abort": false, 00:19:50.134 "seek_hole": false, 00:19:50.134 "seek_data": false, 00:19:50.134 "copy": false, 00:19:50.134 "nvme_iov_md": false 00:19:50.134 }, 00:19:50.134 "memory_domains": [ 00:19:50.134 { 00:19:50.134 "dma_device_id": "system", 00:19:50.134 "dma_device_type": 1 00:19:50.134 }, 00:19:50.134 { 00:19:50.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.134 "dma_device_type": 2 00:19:50.134 }, 00:19:50.134 { 00:19:50.134 "dma_device_id": "system", 00:19:50.134 "dma_device_type": 1 00:19:50.134 }, 00:19:50.134 { 00:19:50.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.134 "dma_device_type": 2 00:19:50.134 }, 00:19:50.134 { 00:19:50.134 "dma_device_id": "system", 00:19:50.134 "dma_device_type": 1 00:19:50.134 }, 00:19:50.134 { 00:19:50.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.134 "dma_device_type": 2 00:19:50.134 }, 00:19:50.134 { 00:19:50.134 "dma_device_id": "system", 00:19:50.134 "dma_device_type": 1 00:19:50.134 }, 00:19:50.134 { 00:19:50.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.134 "dma_device_type": 2 00:19:50.134 } 00:19:50.134 ], 00:19:50.134 "driver_specific": { 00:19:50.134 "raid": { 
00:19:50.134 "uuid": "63f3f2da-885c-4514-8929-3b15181cc420", 00:19:50.134 "strip_size_kb": 64, 00:19:50.134 "state": "online", 00:19:50.134 "raid_level": "concat", 00:19:50.134 "superblock": true, 00:19:50.134 "num_base_bdevs": 4, 00:19:50.134 "num_base_bdevs_discovered": 4, 00:19:50.134 "num_base_bdevs_operational": 4, 00:19:50.134 "base_bdevs_list": [ 00:19:50.134 { 00:19:50.134 "name": "NewBaseBdev", 00:19:50.134 "uuid": "074f13f0-5854-4582-b638-65dcb6d95c57", 00:19:50.134 "is_configured": true, 00:19:50.134 "data_offset": 2048, 00:19:50.134 "data_size": 63488 00:19:50.134 }, 00:19:50.134 { 00:19:50.134 "name": "BaseBdev2", 00:19:50.134 "uuid": "37e5b7f7-f0e9-4159-9345-465e1f69e8a7", 00:19:50.134 "is_configured": true, 00:19:50.134 "data_offset": 2048, 00:19:50.134 "data_size": 63488 00:19:50.134 }, 00:19:50.134 { 00:19:50.134 "name": "BaseBdev3", 00:19:50.134 "uuid": "e4dd0321-a794-4dd7-88b3-2851811f4829", 00:19:50.134 "is_configured": true, 00:19:50.135 "data_offset": 2048, 00:19:50.135 "data_size": 63488 00:19:50.135 }, 00:19:50.135 { 00:19:50.135 "name": "BaseBdev4", 00:19:50.135 "uuid": "53e13ec3-6a5e-4cfe-b1a4-8799d40e34bd", 00:19:50.135 "is_configured": true, 00:19:50.135 "data_offset": 2048, 00:19:50.135 "data_size": 63488 00:19:50.135 } 00:19:50.135 ] 00:19:50.135 } 00:19:50.135 } 00:19:50.135 }' 00:19:50.135 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:50.135 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:50.135 BaseBdev2 00:19:50.135 BaseBdev3 00:19:50.135 BaseBdev4' 00:19:50.135 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:50.135 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:50.135 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:50.392 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:50.392 "name": "NewBaseBdev", 00:19:50.392 "aliases": [ 00:19:50.392 "074f13f0-5854-4582-b638-65dcb6d95c57" 00:19:50.392 ], 00:19:50.393 "product_name": "Malloc disk", 00:19:50.393 "block_size": 512, 00:19:50.393 "num_blocks": 65536, 00:19:50.393 "uuid": "074f13f0-5854-4582-b638-65dcb6d95c57", 00:19:50.393 "assigned_rate_limits": { 00:19:50.393 "rw_ios_per_sec": 0, 00:19:50.393 "rw_mbytes_per_sec": 0, 00:19:50.393 "r_mbytes_per_sec": 0, 00:19:50.393 "w_mbytes_per_sec": 0 00:19:50.393 }, 00:19:50.393 "claimed": true, 00:19:50.393 "claim_type": "exclusive_write", 00:19:50.393 "zoned": false, 00:19:50.393 "supported_io_types": { 00:19:50.393 "read": true, 00:19:50.393 "write": true, 00:19:50.393 "unmap": true, 00:19:50.393 "flush": true, 00:19:50.393 "reset": true, 00:19:50.393 "nvme_admin": false, 00:19:50.393 "nvme_io": false, 00:19:50.393 "nvme_io_md": false, 00:19:50.393 "write_zeroes": true, 00:19:50.393 "zcopy": true, 00:19:50.393 "get_zone_info": false, 00:19:50.393 "zone_management": false, 00:19:50.393 "zone_append": false, 00:19:50.393 "compare": false, 00:19:50.393 "compare_and_write": false, 00:19:50.393 "abort": true, 00:19:50.393 "seek_hole": false, 00:19:50.393 "seek_data": false, 00:19:50.393 "copy": true, 00:19:50.393 "nvme_iov_md": false 00:19:50.393 }, 00:19:50.393 "memory_domains": [ 00:19:50.393 { 00:19:50.393 "dma_device_id": "system", 00:19:50.393 "dma_device_type": 1 00:19:50.393 }, 00:19:50.393 { 00:19:50.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.393 "dma_device_type": 2 00:19:50.393 } 00:19:50.393 ], 00:19:50.393 "driver_specific": {} 00:19:50.393 }' 00:19:50.393 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:50.393 22:03:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:50.393 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:50.393 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:50.393 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:50.393 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:50.393 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:50.694 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:50.694 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:50.694 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:50.694 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:50.694 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:50.694 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:50.694 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:50.694 22:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:50.953 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:50.953 "name": "BaseBdev2", 00:19:50.953 "aliases": [ 00:19:50.953 "37e5b7f7-f0e9-4159-9345-465e1f69e8a7" 00:19:50.953 ], 00:19:50.953 "product_name": "Malloc disk", 00:19:50.953 "block_size": 512, 00:19:50.953 "num_blocks": 65536, 00:19:50.953 "uuid": "37e5b7f7-f0e9-4159-9345-465e1f69e8a7", 00:19:50.953 
"assigned_rate_limits": { 00:19:50.953 "rw_ios_per_sec": 0, 00:19:50.953 "rw_mbytes_per_sec": 0, 00:19:50.953 "r_mbytes_per_sec": 0, 00:19:50.953 "w_mbytes_per_sec": 0 00:19:50.953 }, 00:19:50.953 "claimed": true, 00:19:50.953 "claim_type": "exclusive_write", 00:19:50.953 "zoned": false, 00:19:50.953 "supported_io_types": { 00:19:50.953 "read": true, 00:19:50.953 "write": true, 00:19:50.953 "unmap": true, 00:19:50.953 "flush": true, 00:19:50.953 "reset": true, 00:19:50.953 "nvme_admin": false, 00:19:50.953 "nvme_io": false, 00:19:50.953 "nvme_io_md": false, 00:19:50.953 "write_zeroes": true, 00:19:50.953 "zcopy": true, 00:19:50.953 "get_zone_info": false, 00:19:50.953 "zone_management": false, 00:19:50.953 "zone_append": false, 00:19:50.953 "compare": false, 00:19:50.953 "compare_and_write": false, 00:19:50.953 "abort": true, 00:19:50.953 "seek_hole": false, 00:19:50.953 "seek_data": false, 00:19:50.953 "copy": true, 00:19:50.953 "nvme_iov_md": false 00:19:50.953 }, 00:19:50.953 "memory_domains": [ 00:19:50.953 { 00:19:50.953 "dma_device_id": "system", 00:19:50.953 "dma_device_type": 1 00:19:50.953 }, 00:19:50.953 { 00:19:50.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.953 "dma_device_type": 2 00:19:50.953 } 00:19:50.953 ], 00:19:50.953 "driver_specific": {} 00:19:50.953 }' 00:19:50.953 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:50.953 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:50.953 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:50.953 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:50.953 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:50.953 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:50.953 22:03:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:50.953 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:50.953 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:50.953 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:51.212 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:51.212 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:51.212 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:51.212 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:51.212 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:51.212 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:51.212 "name": "BaseBdev3", 00:19:51.212 "aliases": [ 00:19:51.212 "e4dd0321-a794-4dd7-88b3-2851811f4829" 00:19:51.212 ], 00:19:51.212 "product_name": "Malloc disk", 00:19:51.212 "block_size": 512, 00:19:51.212 "num_blocks": 65536, 00:19:51.212 "uuid": "e4dd0321-a794-4dd7-88b3-2851811f4829", 00:19:51.212 "assigned_rate_limits": { 00:19:51.212 "rw_ios_per_sec": 0, 00:19:51.212 "rw_mbytes_per_sec": 0, 00:19:51.212 "r_mbytes_per_sec": 0, 00:19:51.212 "w_mbytes_per_sec": 0 00:19:51.212 }, 00:19:51.212 "claimed": true, 00:19:51.212 "claim_type": "exclusive_write", 00:19:51.212 "zoned": false, 00:19:51.212 "supported_io_types": { 00:19:51.212 "read": true, 00:19:51.212 "write": true, 00:19:51.212 "unmap": true, 00:19:51.212 "flush": true, 00:19:51.212 "reset": true, 00:19:51.212 "nvme_admin": false, 00:19:51.212 "nvme_io": false, 00:19:51.212 "nvme_io_md": false, 00:19:51.212 
"write_zeroes": true, 00:19:51.212 "zcopy": true, 00:19:51.212 "get_zone_info": false, 00:19:51.212 "zone_management": false, 00:19:51.212 "zone_append": false, 00:19:51.212 "compare": false, 00:19:51.212 "compare_and_write": false, 00:19:51.212 "abort": true, 00:19:51.212 "seek_hole": false, 00:19:51.212 "seek_data": false, 00:19:51.212 "copy": true, 00:19:51.212 "nvme_iov_md": false 00:19:51.212 }, 00:19:51.212 "memory_domains": [ 00:19:51.212 { 00:19:51.212 "dma_device_id": "system", 00:19:51.212 "dma_device_type": 1 00:19:51.212 }, 00:19:51.212 { 00:19:51.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:51.212 "dma_device_type": 2 00:19:51.212 } 00:19:51.212 ], 00:19:51.212 "driver_specific": {} 00:19:51.212 }' 00:19:51.212 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:51.471 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:51.471 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:51.471 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:51.471 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:51.471 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:51.471 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:51.471 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:51.471 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:51.471 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:51.729 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:51.729 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:19:51.729 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:51.729 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:51.729 22:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:51.729 22:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:51.729 "name": "BaseBdev4", 00:19:51.729 "aliases": [ 00:19:51.729 "53e13ec3-6a5e-4cfe-b1a4-8799d40e34bd" 00:19:51.729 ], 00:19:51.729 "product_name": "Malloc disk", 00:19:51.729 "block_size": 512, 00:19:51.729 "num_blocks": 65536, 00:19:51.729 "uuid": "53e13ec3-6a5e-4cfe-b1a4-8799d40e34bd", 00:19:51.729 "assigned_rate_limits": { 00:19:51.729 "rw_ios_per_sec": 0, 00:19:51.729 "rw_mbytes_per_sec": 0, 00:19:51.729 "r_mbytes_per_sec": 0, 00:19:51.729 "w_mbytes_per_sec": 0 00:19:51.729 }, 00:19:51.729 "claimed": true, 00:19:51.729 "claim_type": "exclusive_write", 00:19:51.729 "zoned": false, 00:19:51.729 "supported_io_types": { 00:19:51.729 "read": true, 00:19:51.729 "write": true, 00:19:51.729 "unmap": true, 00:19:51.729 "flush": true, 00:19:51.729 "reset": true, 00:19:51.729 "nvme_admin": false, 00:19:51.729 "nvme_io": false, 00:19:51.729 "nvme_io_md": false, 00:19:51.729 "write_zeroes": true, 00:19:51.729 "zcopy": true, 00:19:51.729 "get_zone_info": false, 00:19:51.729 "zone_management": false, 00:19:51.729 "zone_append": false, 00:19:51.729 "compare": false, 00:19:51.729 "compare_and_write": false, 00:19:51.729 "abort": true, 00:19:51.729 "seek_hole": false, 00:19:51.729 "seek_data": false, 00:19:51.729 "copy": true, 00:19:51.729 "nvme_iov_md": false 00:19:51.729 }, 00:19:51.729 "memory_domains": [ 00:19:51.729 { 00:19:51.729 "dma_device_id": "system", 00:19:51.729 "dma_device_type": 1 00:19:51.729 }, 00:19:51.729 { 00:19:51.729 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:51.729 "dma_device_type": 2 00:19:51.729 } 00:19:51.729 ], 00:19:51.729 "driver_specific": {} 00:19:51.729 }' 00:19:51.730 22:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:51.988 22:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:51.988 22:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:51.988 22:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:51.988 22:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:51.988 22:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:51.988 22:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:51.988 22:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:51.989 22:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:51.989 22:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:51.989 22:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:52.248 22:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:52.248 22:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:52.248 [2024-07-13 22:03:11.538082] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:52.248 [2024-07-13 22:03:11.538109] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:52.248 [2024-07-13 22:03:11.538178] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:19:52.248 [2024-07-13 22:03:11.538242] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:52.248 [2024-07-13 22:03:11.538253] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042080 name Existed_Raid, state offline 00:19:52.248 22:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1432709 00:19:52.248 22:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1432709 ']' 00:19:52.248 22:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 1432709 00:19:52.248 22:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:19:52.248 22:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:52.248 22:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1432709 00:19:52.248 22:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:52.248 22:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:52.248 22:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1432709' 00:19:52.248 killing process with pid 1432709 00:19:52.248 22:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 1432709 00:19:52.248 [2024-07-13 22:03:11.599740] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:52.248 22:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 1432709 00:19:52.817 [2024-07-13 22:03:11.921667] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:53.756 22:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 
00:19:53.756 00:19:53.756 real 0m26.160s 00:19:53.756 user 0m45.899s 00:19:53.756 sys 0m4.794s 00:19:53.756 22:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:53.756 22:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.756 ************************************ 00:19:53.756 END TEST raid_state_function_test_sb 00:19:53.756 ************************************ 00:19:54.021 22:03:13 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:54.021 22:03:13 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:19:54.021 22:03:13 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:54.021 22:03:13 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:54.021 22:03:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:54.021 ************************************ 00:19:54.021 START TEST raid_superblock_test 00:19:54.021 ************************************ 00:19:54.021 22:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 4 00:19:54.021 22:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:19:54.021 22:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:19:54.021 22:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:19:54.021 22:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:19:54.021 22:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:19:54.021 22:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:19:54.021 22:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:19:54.021 22:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # 
local base_bdevs_pt_uuid 00:19:54.021 22:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:19:54.021 22:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:19:54.021 22:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:19:54.021 22:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:19:54.021 22:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:19:54.021 22:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:19:54.021 22:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:19:54.021 22:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:19:54.021 22:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1438277 00:19:54.021 22:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1438277 /var/tmp/spdk-raid.sock 00:19:54.021 22:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:54.021 22:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 1438277 ']' 00:19:54.021 22:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:54.021 22:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:54.021 22:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:54.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:19:54.021 22:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:54.021 22:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.021 [2024-07-13 22:03:13.302814] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:54.021 [2024-07-13 22:03:13.302916] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1438277 ] 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3d:02.3 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3f:01.3 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3f:01.4 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3f:01.7 
cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:54.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:54.021 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:54.279 [2024-07-13 22:03:13.463633] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.279 [2024-07-13 22:03:13.665886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.537 [2024-07-13 22:03:13.901319] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:54.537 [2024-07-13 22:03:13.901348] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:54.795 22:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:54.795 22:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:19:54.795 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:19:54.795 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 
00:19:54.795 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:19:54.795 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:19:54.795 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:54.795 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:54.795 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:54.795 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:54.795 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:55.054 malloc1 00:19:55.054 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:55.054 [2024-07-13 22:03:14.409493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:55.054 [2024-07-13 22:03:14.409544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.054 [2024-07-13 22:03:14.409582] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:19:55.054 [2024-07-13 22:03:14.409594] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.054 [2024-07-13 22:03:14.411661] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.054 [2024-07-13 22:03:14.411690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:55.054 pt1 00:19:55.054 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- 
# (( i++ )) 00:19:55.054 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:55.054 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:19:55.054 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:19:55.054 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:55.054 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:55.054 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:55.054 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:55.054 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:55.311 malloc2 00:19:55.311 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:55.569 [2024-07-13 22:03:14.786629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:55.569 [2024-07-13 22:03:14.786683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.569 [2024-07-13 22:03:14.786705] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:19:55.569 [2024-07-13 22:03:14.786717] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.569 [2024-07-13 22:03:14.788850] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.569 [2024-07-13 22:03:14.788883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt2 00:19:55.569 pt2 00:19:55.569 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:55.569 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:55.569 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:19:55.569 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:19:55.569 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:55.570 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:55.570 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:55.570 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:55.570 22:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:55.829 malloc3 00:19:55.829 22:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:55.829 [2024-07-13 22:03:15.171350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:55.829 [2024-07-13 22:03:15.171406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.829 [2024-07-13 22:03:15.171432] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:19:55.829 [2024-07-13 22:03:15.171443] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.829 [2024-07-13 22:03:15.173512] vbdev_passthru.c: 708:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:19:55.829 [2024-07-13 22:03:15.173539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:55.829 pt3 00:19:55.829 22:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:55.829 22:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:55.829 22:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:19:55.829 22:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:19:55.829 22:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:55.829 22:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:55.829 22:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:55.829 22:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:55.829 22:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:19:56.088 malloc4 00:19:56.088 22:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:56.348 [2024-07-13 22:03:15.553733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:56.348 [2024-07-13 22:03:15.553791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:56.348 [2024-07-13 22:03:15.553813] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80 00:19:56.348 [2024-07-13 22:03:15.553824] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:56.348 [2024-07-13 22:03:15.555993] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:56.348 [2024-07-13 22:03:15.556021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:56.348 pt4 00:19:56.348 22:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:56.348 22:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:56.348 22:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:19:56.348 [2024-07-13 22:03:15.722258] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:56.348 [2024-07-13 22:03:15.723995] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:56.349 [2024-07-13 22:03:15.724060] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:56.349 [2024-07-13 22:03:15.724100] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:56.349 [2024-07-13 22:03:15.724273] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042080 00:19:56.349 [2024-07-13 22:03:15.724287] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:56.349 [2024-07-13 22:03:15.724551] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:19:56.349 [2024-07-13 22:03:15.724734] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042080 00:19:56.349 [2024-07-13 22:03:15.724747] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042080 00:19:56.349 [2024-07-13 22:03:15.724889] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:19:56.349 22:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:56.349 22:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:56.349 22:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:56.349 22:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:56.349 22:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:56.349 22:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:56.349 22:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:56.607 22:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:56.607 22:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:56.608 22:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:56.608 22:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.608 22:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.608 22:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:56.608 "name": "raid_bdev1", 00:19:56.608 "uuid": "974beb8c-3b18-40b1-8dbf-6680c5c59ca2", 00:19:56.608 "strip_size_kb": 64, 00:19:56.608 "state": "online", 00:19:56.608 "raid_level": "concat", 00:19:56.608 "superblock": true, 00:19:56.608 "num_base_bdevs": 4, 00:19:56.608 "num_base_bdevs_discovered": 4, 00:19:56.608 "num_base_bdevs_operational": 4, 00:19:56.608 "base_bdevs_list": [ 00:19:56.608 { 00:19:56.608 "name": "pt1", 00:19:56.608 
"uuid": "00000000-0000-0000-0000-000000000001", 00:19:56.608 "is_configured": true, 00:19:56.608 "data_offset": 2048, 00:19:56.608 "data_size": 63488 00:19:56.608 }, 00:19:56.608 { 00:19:56.608 "name": "pt2", 00:19:56.608 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:56.608 "is_configured": true, 00:19:56.608 "data_offset": 2048, 00:19:56.608 "data_size": 63488 00:19:56.608 }, 00:19:56.608 { 00:19:56.608 "name": "pt3", 00:19:56.608 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:56.608 "is_configured": true, 00:19:56.608 "data_offset": 2048, 00:19:56.608 "data_size": 63488 00:19:56.608 }, 00:19:56.608 { 00:19:56.608 "name": "pt4", 00:19:56.608 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:56.608 "is_configured": true, 00:19:56.608 "data_offset": 2048, 00:19:56.608 "data_size": 63488 00:19:56.608 } 00:19:56.608 ] 00:19:56.608 }' 00:19:56.608 22:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:56.608 22:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.176 22:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:19:57.176 22:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:57.176 22:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:57.176 22:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:57.176 22:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:57.176 22:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:57.176 22:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:57.176 22:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
00:19:57.436 [2024-07-13 22:03:16.576726] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:57.436 22:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:57.436 "name": "raid_bdev1", 00:19:57.436 "aliases": [ 00:19:57.436 "974beb8c-3b18-40b1-8dbf-6680c5c59ca2" 00:19:57.436 ], 00:19:57.436 "product_name": "Raid Volume", 00:19:57.436 "block_size": 512, 00:19:57.436 "num_blocks": 253952, 00:19:57.436 "uuid": "974beb8c-3b18-40b1-8dbf-6680c5c59ca2", 00:19:57.436 "assigned_rate_limits": { 00:19:57.436 "rw_ios_per_sec": 0, 00:19:57.436 "rw_mbytes_per_sec": 0, 00:19:57.436 "r_mbytes_per_sec": 0, 00:19:57.436 "w_mbytes_per_sec": 0 00:19:57.436 }, 00:19:57.436 "claimed": false, 00:19:57.436 "zoned": false, 00:19:57.436 "supported_io_types": { 00:19:57.436 "read": true, 00:19:57.436 "write": true, 00:19:57.436 "unmap": true, 00:19:57.436 "flush": true, 00:19:57.436 "reset": true, 00:19:57.436 "nvme_admin": false, 00:19:57.436 "nvme_io": false, 00:19:57.436 "nvme_io_md": false, 00:19:57.436 "write_zeroes": true, 00:19:57.436 "zcopy": false, 00:19:57.436 "get_zone_info": false, 00:19:57.436 "zone_management": false, 00:19:57.436 "zone_append": false, 00:19:57.436 "compare": false, 00:19:57.436 "compare_and_write": false, 00:19:57.436 "abort": false, 00:19:57.436 "seek_hole": false, 00:19:57.436 "seek_data": false, 00:19:57.436 "copy": false, 00:19:57.436 "nvme_iov_md": false 00:19:57.436 }, 00:19:57.436 "memory_domains": [ 00:19:57.436 { 00:19:57.436 "dma_device_id": "system", 00:19:57.436 "dma_device_type": 1 00:19:57.436 }, 00:19:57.436 { 00:19:57.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.436 "dma_device_type": 2 00:19:57.436 }, 00:19:57.436 { 00:19:57.436 "dma_device_id": "system", 00:19:57.436 "dma_device_type": 1 00:19:57.436 }, 00:19:57.436 { 00:19:57.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.436 "dma_device_type": 2 00:19:57.436 }, 00:19:57.436 { 00:19:57.436 
"dma_device_id": "system", 00:19:57.436 "dma_device_type": 1 00:19:57.436 }, 00:19:57.436 { 00:19:57.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.436 "dma_device_type": 2 00:19:57.436 }, 00:19:57.436 { 00:19:57.436 "dma_device_id": "system", 00:19:57.436 "dma_device_type": 1 00:19:57.436 }, 00:19:57.436 { 00:19:57.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.436 "dma_device_type": 2 00:19:57.436 } 00:19:57.436 ], 00:19:57.436 "driver_specific": { 00:19:57.436 "raid": { 00:19:57.436 "uuid": "974beb8c-3b18-40b1-8dbf-6680c5c59ca2", 00:19:57.436 "strip_size_kb": 64, 00:19:57.436 "state": "online", 00:19:57.436 "raid_level": "concat", 00:19:57.436 "superblock": true, 00:19:57.436 "num_base_bdevs": 4, 00:19:57.436 "num_base_bdevs_discovered": 4, 00:19:57.436 "num_base_bdevs_operational": 4, 00:19:57.436 "base_bdevs_list": [ 00:19:57.436 { 00:19:57.436 "name": "pt1", 00:19:57.436 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:57.436 "is_configured": true, 00:19:57.436 "data_offset": 2048, 00:19:57.436 "data_size": 63488 00:19:57.436 }, 00:19:57.436 { 00:19:57.436 "name": "pt2", 00:19:57.436 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:57.436 "is_configured": true, 00:19:57.436 "data_offset": 2048, 00:19:57.436 "data_size": 63488 00:19:57.436 }, 00:19:57.436 { 00:19:57.436 "name": "pt3", 00:19:57.436 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:57.436 "is_configured": true, 00:19:57.436 "data_offset": 2048, 00:19:57.436 "data_size": 63488 00:19:57.436 }, 00:19:57.436 { 00:19:57.436 "name": "pt4", 00:19:57.436 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:57.436 "is_configured": true, 00:19:57.437 "data_offset": 2048, 00:19:57.437 "data_size": 63488 00:19:57.437 } 00:19:57.437 ] 00:19:57.437 } 00:19:57.437 } 00:19:57.437 }' 00:19:57.437 22:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:57.437 22:03:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:57.437 pt2 00:19:57.437 pt3 00:19:57.437 pt4' 00:19:57.437 22:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:57.437 22:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:57.437 22:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:57.437 22:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:57.437 "name": "pt1", 00:19:57.437 "aliases": [ 00:19:57.437 "00000000-0000-0000-0000-000000000001" 00:19:57.437 ], 00:19:57.437 "product_name": "passthru", 00:19:57.437 "block_size": 512, 00:19:57.437 "num_blocks": 65536, 00:19:57.437 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:57.437 "assigned_rate_limits": { 00:19:57.437 "rw_ios_per_sec": 0, 00:19:57.437 "rw_mbytes_per_sec": 0, 00:19:57.437 "r_mbytes_per_sec": 0, 00:19:57.437 "w_mbytes_per_sec": 0 00:19:57.437 }, 00:19:57.437 "claimed": true, 00:19:57.437 "claim_type": "exclusive_write", 00:19:57.437 "zoned": false, 00:19:57.437 "supported_io_types": { 00:19:57.437 "read": true, 00:19:57.437 "write": true, 00:19:57.437 "unmap": true, 00:19:57.437 "flush": true, 00:19:57.437 "reset": true, 00:19:57.437 "nvme_admin": false, 00:19:57.437 "nvme_io": false, 00:19:57.437 "nvme_io_md": false, 00:19:57.437 "write_zeroes": true, 00:19:57.437 "zcopy": true, 00:19:57.437 "get_zone_info": false, 00:19:57.437 "zone_management": false, 00:19:57.437 "zone_append": false, 00:19:57.437 "compare": false, 00:19:57.437 "compare_and_write": false, 00:19:57.437 "abort": true, 00:19:57.437 "seek_hole": false, 00:19:57.437 "seek_data": false, 00:19:57.437 "copy": true, 00:19:57.437 "nvme_iov_md": false 00:19:57.437 }, 00:19:57.437 "memory_domains": [ 00:19:57.437 { 00:19:57.437 "dma_device_id": 
"system", 00:19:57.437 "dma_device_type": 1 00:19:57.437 }, 00:19:57.437 { 00:19:57.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.437 "dma_device_type": 2 00:19:57.437 } 00:19:57.437 ], 00:19:57.437 "driver_specific": { 00:19:57.437 "passthru": { 00:19:57.437 "name": "pt1", 00:19:57.437 "base_bdev_name": "malloc1" 00:19:57.437 } 00:19:57.437 } 00:19:57.437 }' 00:19:57.437 22:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:57.696 22:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:57.696 22:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:57.696 22:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:57.696 22:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:57.696 22:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:57.696 22:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:57.696 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:57.696 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:57.696 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:57.955 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:57.955 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:57.955 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:57.955 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:57.955 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:57.955 22:03:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:57.955 "name": "pt2", 00:19:57.955 "aliases": [ 00:19:57.955 "00000000-0000-0000-0000-000000000002" 00:19:57.955 ], 00:19:57.955 "product_name": "passthru", 00:19:57.955 "block_size": 512, 00:19:57.955 "num_blocks": 65536, 00:19:57.955 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:57.955 "assigned_rate_limits": { 00:19:57.955 "rw_ios_per_sec": 0, 00:19:57.955 "rw_mbytes_per_sec": 0, 00:19:57.955 "r_mbytes_per_sec": 0, 00:19:57.955 "w_mbytes_per_sec": 0 00:19:57.955 }, 00:19:57.955 "claimed": true, 00:19:57.955 "claim_type": "exclusive_write", 00:19:57.955 "zoned": false, 00:19:57.955 "supported_io_types": { 00:19:57.955 "read": true, 00:19:57.955 "write": true, 00:19:57.955 "unmap": true, 00:19:57.955 "flush": true, 00:19:57.955 "reset": true, 00:19:57.955 "nvme_admin": false, 00:19:57.955 "nvme_io": false, 00:19:57.955 "nvme_io_md": false, 00:19:57.955 "write_zeroes": true, 00:19:57.955 "zcopy": true, 00:19:57.955 "get_zone_info": false, 00:19:57.955 "zone_management": false, 00:19:57.955 "zone_append": false, 00:19:57.955 "compare": false, 00:19:57.955 "compare_and_write": false, 00:19:57.955 "abort": true, 00:19:57.955 "seek_hole": false, 00:19:57.955 "seek_data": false, 00:19:57.955 "copy": true, 00:19:57.955 "nvme_iov_md": false 00:19:57.955 }, 00:19:57.955 "memory_domains": [ 00:19:57.955 { 00:19:57.955 "dma_device_id": "system", 00:19:57.955 "dma_device_type": 1 00:19:57.955 }, 00:19:57.955 { 00:19:57.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.955 "dma_device_type": 2 00:19:57.955 } 00:19:57.955 ], 00:19:57.955 "driver_specific": { 00:19:57.955 "passthru": { 00:19:57.955 "name": "pt2", 00:19:57.955 "base_bdev_name": "malloc2" 00:19:57.955 } 00:19:57.955 } 00:19:57.955 }' 00:19:57.955 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:58.214 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:19:58.214 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:58.214 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:58.214 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:58.214 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:58.214 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:58.214 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:58.214 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:58.214 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:58.214 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:58.214 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:58.214 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:58.473 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:58.473 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:58.473 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:58.473 "name": "pt3", 00:19:58.473 "aliases": [ 00:19:58.473 "00000000-0000-0000-0000-000000000003" 00:19:58.473 ], 00:19:58.473 "product_name": "passthru", 00:19:58.473 "block_size": 512, 00:19:58.473 "num_blocks": 65536, 00:19:58.473 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:58.473 "assigned_rate_limits": { 00:19:58.473 "rw_ios_per_sec": 0, 00:19:58.473 "rw_mbytes_per_sec": 0, 00:19:58.473 "r_mbytes_per_sec": 0, 00:19:58.473 "w_mbytes_per_sec": 0 00:19:58.473 }, 
00:19:58.473 "claimed": true, 00:19:58.473 "claim_type": "exclusive_write", 00:19:58.473 "zoned": false, 00:19:58.473 "supported_io_types": { 00:19:58.473 "read": true, 00:19:58.473 "write": true, 00:19:58.473 "unmap": true, 00:19:58.473 "flush": true, 00:19:58.473 "reset": true, 00:19:58.473 "nvme_admin": false, 00:19:58.473 "nvme_io": false, 00:19:58.473 "nvme_io_md": false, 00:19:58.473 "write_zeroes": true, 00:19:58.473 "zcopy": true, 00:19:58.473 "get_zone_info": false, 00:19:58.473 "zone_management": false, 00:19:58.473 "zone_append": false, 00:19:58.473 "compare": false, 00:19:58.473 "compare_and_write": false, 00:19:58.473 "abort": true, 00:19:58.473 "seek_hole": false, 00:19:58.473 "seek_data": false, 00:19:58.473 "copy": true, 00:19:58.473 "nvme_iov_md": false 00:19:58.473 }, 00:19:58.473 "memory_domains": [ 00:19:58.473 { 00:19:58.473 "dma_device_id": "system", 00:19:58.473 "dma_device_type": 1 00:19:58.473 }, 00:19:58.473 { 00:19:58.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:58.473 "dma_device_type": 2 00:19:58.473 } 00:19:58.473 ], 00:19:58.473 "driver_specific": { 00:19:58.473 "passthru": { 00:19:58.473 "name": "pt3", 00:19:58.473 "base_bdev_name": "malloc3" 00:19:58.473 } 00:19:58.473 } 00:19:58.473 }' 00:19:58.473 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:58.473 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:58.473 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:58.473 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:58.732 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:58.732 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:58.732 22:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:58.732 22:03:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:58.732 22:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:58.732 22:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:58.732 22:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:58.732 22:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:58.732 22:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:58.732 22:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:19:58.733 22:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:58.991 22:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:58.992 "name": "pt4", 00:19:58.992 "aliases": [ 00:19:58.992 "00000000-0000-0000-0000-000000000004" 00:19:58.992 ], 00:19:58.992 "product_name": "passthru", 00:19:58.992 "block_size": 512, 00:19:58.992 "num_blocks": 65536, 00:19:58.992 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:58.992 "assigned_rate_limits": { 00:19:58.992 "rw_ios_per_sec": 0, 00:19:58.992 "rw_mbytes_per_sec": 0, 00:19:58.992 "r_mbytes_per_sec": 0, 00:19:58.992 "w_mbytes_per_sec": 0 00:19:58.992 }, 00:19:58.992 "claimed": true, 00:19:58.992 "claim_type": "exclusive_write", 00:19:58.992 "zoned": false, 00:19:58.992 "supported_io_types": { 00:19:58.992 "read": true, 00:19:58.992 "write": true, 00:19:58.992 "unmap": true, 00:19:58.992 "flush": true, 00:19:58.992 "reset": true, 00:19:58.992 "nvme_admin": false, 00:19:58.992 "nvme_io": false, 00:19:58.992 "nvme_io_md": false, 00:19:58.992 "write_zeroes": true, 00:19:58.992 "zcopy": true, 00:19:58.992 "get_zone_info": false, 00:19:58.992 "zone_management": false, 00:19:58.992 "zone_append": false, 00:19:58.992 
"compare": false, 00:19:58.992 "compare_and_write": false, 00:19:58.992 "abort": true, 00:19:58.992 "seek_hole": false, 00:19:58.992 "seek_data": false, 00:19:58.992 "copy": true, 00:19:58.992 "nvme_iov_md": false 00:19:58.992 }, 00:19:58.992 "memory_domains": [ 00:19:58.992 { 00:19:58.992 "dma_device_id": "system", 00:19:58.992 "dma_device_type": 1 00:19:58.992 }, 00:19:58.992 { 00:19:58.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:58.992 "dma_device_type": 2 00:19:58.992 } 00:19:58.992 ], 00:19:58.992 "driver_specific": { 00:19:58.992 "passthru": { 00:19:58.992 "name": "pt4", 00:19:58.992 "base_bdev_name": "malloc4" 00:19:58.992 } 00:19:58.992 } 00:19:58.992 }' 00:19:58.992 22:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:58.992 22:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:58.992 22:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:58.992 22:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:58.992 22:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:59.251 22:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:59.251 22:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:59.251 22:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:59.251 22:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:59.251 22:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:59.251 22:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:59.251 22:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:59.251 22:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:19:59.251 22:03:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:59.510 [2024-07-13 22:03:18.718334] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:59.510 22:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=974beb8c-3b18-40b1-8dbf-6680c5c59ca2 00:19:59.510 22:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 974beb8c-3b18-40b1-8dbf-6680c5c59ca2 ']' 00:19:59.510 22:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:59.510 [2024-07-13 22:03:18.890495] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:59.510 [2024-07-13 22:03:18.890521] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:59.510 [2024-07-13 22:03:18.890595] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:59.510 [2024-07-13 22:03:18.890658] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:59.510 [2024-07-13 22:03:18.890672] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042080 name raid_bdev1, state offline 00:19:59.770 22:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.770 22:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:19:59.770 22:03:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:19:59.770 22:03:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:19:59.770 22:03:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:59.770 22:03:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:00.029 22:03:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:00.029 22:03:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:00.288 22:03:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:00.288 22:03:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:00.288 22:03:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:00.288 22:03:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:00.548 22:03:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:00.548 22:03:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:00.548 22:03:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:20:00.548 22:03:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:00.548 22:03:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 
00:20:00.548 22:03:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:00.548 22:03:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:00.548 22:03:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.548 22:03:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:00.548 22:03:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.548 22:03:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:00.548 22:03:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.548 22:03:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:00.548 22:03:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:20:00.548 22:03:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:00.807 [2024-07-13 22:03:20.081621] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:00.807 [2024-07-13 22:03:20.083360] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:00.807 [2024-07-13 22:03:20.083406] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:00.807 [2024-07-13 22:03:20.083440] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:20:00.807 [2024-07-13 22:03:20.083483] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:00.807 [2024-07-13 22:03:20.083526] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:00.807 [2024-07-13 22:03:20.083547] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:00.807 [2024-07-13 22:03:20.083567] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:20:00.807 [2024-07-13 22:03:20.083583] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:00.807 [2024-07-13 22:03:20.083596] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state configuring 00:20:00.807 request: 00:20:00.807 { 00:20:00.807 "name": "raid_bdev1", 00:20:00.807 "raid_level": "concat", 00:20:00.807 "base_bdevs": [ 00:20:00.807 "malloc1", 00:20:00.807 "malloc2", 00:20:00.807 "malloc3", 00:20:00.807 "malloc4" 00:20:00.807 ], 00:20:00.807 "strip_size_kb": 64, 00:20:00.807 "superblock": false, 00:20:00.807 "method": "bdev_raid_create", 00:20:00.807 "req_id": 1 00:20:00.807 } 00:20:00.807 Got JSON-RPC error response 00:20:00.807 response: 00:20:00.807 { 00:20:00.807 "code": -17, 00:20:00.807 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:00.807 } 00:20:00.807 22:03:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:20:00.807 22:03:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:00.807 22:03:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:00.807 22:03:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:00.807 22:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.807 22:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:20:01.066 22:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:20:01.066 22:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:20:01.066 22:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:01.066 [2024-07-13 22:03:20.418521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:01.066 [2024-07-13 22:03:20.418598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.066 [2024-07-13 22:03:20.418618] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:20:01.066 [2024-07-13 22:03:20.418632] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.066 [2024-07-13 22:03:20.420785] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.066 [2024-07-13 22:03:20.420820] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:01.066 [2024-07-13 22:03:20.420899] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:01.066 [2024-07-13 22:03:20.420965] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:01.066 pt1 00:20:01.066 22:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # 
verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:20:01.066 22:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:01.066 22:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:01.066 22:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:01.066 22:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:01.066 22:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:01.066 22:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:01.066 22:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:01.066 22:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:01.066 22:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:01.066 22:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.066 22:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.325 22:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:01.325 "name": "raid_bdev1", 00:20:01.325 "uuid": "974beb8c-3b18-40b1-8dbf-6680c5c59ca2", 00:20:01.325 "strip_size_kb": 64, 00:20:01.325 "state": "configuring", 00:20:01.325 "raid_level": "concat", 00:20:01.325 "superblock": true, 00:20:01.325 "num_base_bdevs": 4, 00:20:01.325 "num_base_bdevs_discovered": 1, 00:20:01.325 "num_base_bdevs_operational": 4, 00:20:01.325 "base_bdevs_list": [ 00:20:01.325 { 00:20:01.325 "name": "pt1", 00:20:01.325 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:01.325 "is_configured": true, 
00:20:01.325 "data_offset": 2048, 00:20:01.325 "data_size": 63488 00:20:01.325 }, 00:20:01.325 { 00:20:01.325 "name": null, 00:20:01.325 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:01.325 "is_configured": false, 00:20:01.325 "data_offset": 2048, 00:20:01.325 "data_size": 63488 00:20:01.326 }, 00:20:01.326 { 00:20:01.326 "name": null, 00:20:01.326 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:01.326 "is_configured": false, 00:20:01.326 "data_offset": 2048, 00:20:01.326 "data_size": 63488 00:20:01.326 }, 00:20:01.326 { 00:20:01.326 "name": null, 00:20:01.326 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:01.326 "is_configured": false, 00:20:01.326 "data_offset": 2048, 00:20:01.326 "data_size": 63488 00:20:01.326 } 00:20:01.326 ] 00:20:01.326 }' 00:20:01.326 22:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:01.326 22:03:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.893 22:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:20:01.893 22:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:01.893 [2024-07-13 22:03:21.268837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:01.893 [2024-07-13 22:03:21.268916] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.893 [2024-07-13 22:03:21.268937] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043580 00:20:01.893 [2024-07-13 22:03:21.268952] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.893 [2024-07-13 22:03:21.269390] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.893 [2024-07-13 22:03:21.269410] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:01.893 [2024-07-13 22:03:21.269483] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:01.893 [2024-07-13 22:03:21.269507] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:01.893 pt2 00:20:02.153 22:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:02.153 [2024-07-13 22:03:21.437304] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:02.153 22:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:20:02.153 22:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:02.153 22:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:02.153 22:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:02.153 22:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:02.153 22:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:02.153 22:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:02.153 22:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:02.153 22:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:02.153 22:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:02.153 22:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.153 22:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.413 22:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:02.413 "name": "raid_bdev1", 00:20:02.413 "uuid": "974beb8c-3b18-40b1-8dbf-6680c5c59ca2", 00:20:02.413 "strip_size_kb": 64, 00:20:02.413 "state": "configuring", 00:20:02.413 "raid_level": "concat", 00:20:02.413 "superblock": true, 00:20:02.413 "num_base_bdevs": 4, 00:20:02.413 "num_base_bdevs_discovered": 1, 00:20:02.413 "num_base_bdevs_operational": 4, 00:20:02.413 "base_bdevs_list": [ 00:20:02.413 { 00:20:02.413 "name": "pt1", 00:20:02.413 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:02.413 "is_configured": true, 00:20:02.413 "data_offset": 2048, 00:20:02.413 "data_size": 63488 00:20:02.413 }, 00:20:02.413 { 00:20:02.413 "name": null, 00:20:02.413 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:02.413 "is_configured": false, 00:20:02.414 "data_offset": 2048, 00:20:02.414 "data_size": 63488 00:20:02.414 }, 00:20:02.414 { 00:20:02.414 "name": null, 00:20:02.414 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:02.414 "is_configured": false, 00:20:02.414 "data_offset": 2048, 00:20:02.414 "data_size": 63488 00:20:02.414 }, 00:20:02.414 { 00:20:02.414 "name": null, 00:20:02.414 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:02.414 "is_configured": false, 00:20:02.414 "data_offset": 2048, 00:20:02.414 "data_size": 63488 00:20:02.414 } 00:20:02.414 ] 00:20:02.414 }' 00:20:02.414 22:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:02.414 22:03:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.982 22:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:20:02.983 22:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:02.983 22:03:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:02.983 [2024-07-13 22:03:22.271451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:02.983 [2024-07-13 22:03:22.271504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.983 [2024-07-13 22:03:22.271541] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043880 00:20:02.983 [2024-07-13 22:03:22.271552] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.983 [2024-07-13 22:03:22.271992] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.983 [2024-07-13 22:03:22.272011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:02.983 [2024-07-13 22:03:22.272087] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:02.983 [2024-07-13 22:03:22.272108] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:02.983 pt2 00:20:02.983 22:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:02.983 22:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:02.983 22:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:03.242 [2024-07-13 22:03:22.443914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:03.242 [2024-07-13 22:03:22.443958] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.242 [2024-07-13 22:03:22.443983] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000043b80 00:20:03.242 [2024-07-13 22:03:22.443993] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.242 [2024-07-13 22:03:22.444408] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.242 [2024-07-13 22:03:22.444427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:03.242 [2024-07-13 22:03:22.444494] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:03.242 [2024-07-13 22:03:22.444513] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:03.242 pt3 00:20:03.242 22:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:03.242 22:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:03.242 22:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:03.242 [2024-07-13 22:03:22.624397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:03.242 [2024-07-13 22:03:22.624445] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.242 [2024-07-13 22:03:22.624467] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043e80 00:20:03.242 [2024-07-13 22:03:22.624478] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.242 [2024-07-13 22:03:22.624882] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.242 [2024-07-13 22:03:22.624900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:03.242 [2024-07-13 22:03:22.624992] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:03.242 [2024-07-13 22:03:22.625012] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:03.242 [2024-07-13 22:03:22.625171] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:20:03.242 [2024-07-13 22:03:22.625181] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:03.242 [2024-07-13 22:03:22.625419] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:20:03.242 [2024-07-13 22:03:22.625585] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:20:03.242 [2024-07-13 22:03:22.625599] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:20:03.242 [2024-07-13 22:03:22.625734] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:03.242 pt4 00:20:03.540 22:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:03.540 22:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:03.540 22:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:20:03.540 22:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:03.540 22:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:03.540 22:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:03.540 22:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:03.540 22:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:03.540 22:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:03.540 22:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:03.540 22:03:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:03.540 22:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:03.540 22:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.540 22:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.540 22:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:03.540 "name": "raid_bdev1", 00:20:03.540 "uuid": "974beb8c-3b18-40b1-8dbf-6680c5c59ca2", 00:20:03.540 "strip_size_kb": 64, 00:20:03.540 "state": "online", 00:20:03.540 "raid_level": "concat", 00:20:03.540 "superblock": true, 00:20:03.540 "num_base_bdevs": 4, 00:20:03.540 "num_base_bdevs_discovered": 4, 00:20:03.540 "num_base_bdevs_operational": 4, 00:20:03.540 "base_bdevs_list": [ 00:20:03.540 { 00:20:03.540 "name": "pt1", 00:20:03.540 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:03.540 "is_configured": true, 00:20:03.540 "data_offset": 2048, 00:20:03.540 "data_size": 63488 00:20:03.540 }, 00:20:03.540 { 00:20:03.540 "name": "pt2", 00:20:03.540 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:03.540 "is_configured": true, 00:20:03.540 "data_offset": 2048, 00:20:03.540 "data_size": 63488 00:20:03.540 }, 00:20:03.540 { 00:20:03.540 "name": "pt3", 00:20:03.540 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:03.540 "is_configured": true, 00:20:03.540 "data_offset": 2048, 00:20:03.540 "data_size": 63488 00:20:03.540 }, 00:20:03.540 { 00:20:03.540 "name": "pt4", 00:20:03.540 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:03.540 "is_configured": true, 00:20:03.540 "data_offset": 2048, 00:20:03.540 "data_size": 63488 00:20:03.540 } 00:20:03.540 ] 00:20:03.540 }' 00:20:03.540 22:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 
-- # xtrace_disable 00:20:03.540 22:03:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.108 22:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:20:04.108 22:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:04.108 22:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:04.108 22:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:04.108 22:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:04.108 22:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:04.108 22:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:04.108 22:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:04.108 [2024-07-13 22:03:23.475024] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:04.108 22:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:04.108 "name": "raid_bdev1", 00:20:04.108 "aliases": [ 00:20:04.108 "974beb8c-3b18-40b1-8dbf-6680c5c59ca2" 00:20:04.108 ], 00:20:04.108 "product_name": "Raid Volume", 00:20:04.108 "block_size": 512, 00:20:04.108 "num_blocks": 253952, 00:20:04.108 "uuid": "974beb8c-3b18-40b1-8dbf-6680c5c59ca2", 00:20:04.108 "assigned_rate_limits": { 00:20:04.108 "rw_ios_per_sec": 0, 00:20:04.108 "rw_mbytes_per_sec": 0, 00:20:04.108 "r_mbytes_per_sec": 0, 00:20:04.108 "w_mbytes_per_sec": 0 00:20:04.108 }, 00:20:04.108 "claimed": false, 00:20:04.108 "zoned": false, 00:20:04.108 "supported_io_types": { 00:20:04.108 "read": true, 00:20:04.108 "write": true, 00:20:04.108 "unmap": true, 00:20:04.108 "flush": true, 00:20:04.108 
"reset": true, 00:20:04.108 "nvme_admin": false, 00:20:04.108 "nvme_io": false, 00:20:04.108 "nvme_io_md": false, 00:20:04.108 "write_zeroes": true, 00:20:04.108 "zcopy": false, 00:20:04.108 "get_zone_info": false, 00:20:04.108 "zone_management": false, 00:20:04.108 "zone_append": false, 00:20:04.108 "compare": false, 00:20:04.108 "compare_and_write": false, 00:20:04.108 "abort": false, 00:20:04.108 "seek_hole": false, 00:20:04.108 "seek_data": false, 00:20:04.108 "copy": false, 00:20:04.108 "nvme_iov_md": false 00:20:04.108 }, 00:20:04.108 "memory_domains": [ 00:20:04.108 { 00:20:04.108 "dma_device_id": "system", 00:20:04.108 "dma_device_type": 1 00:20:04.108 }, 00:20:04.108 { 00:20:04.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.108 "dma_device_type": 2 00:20:04.108 }, 00:20:04.108 { 00:20:04.108 "dma_device_id": "system", 00:20:04.108 "dma_device_type": 1 00:20:04.108 }, 00:20:04.108 { 00:20:04.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.108 "dma_device_type": 2 00:20:04.108 }, 00:20:04.108 { 00:20:04.108 "dma_device_id": "system", 00:20:04.108 "dma_device_type": 1 00:20:04.108 }, 00:20:04.108 { 00:20:04.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.108 "dma_device_type": 2 00:20:04.108 }, 00:20:04.108 { 00:20:04.108 "dma_device_id": "system", 00:20:04.109 "dma_device_type": 1 00:20:04.109 }, 00:20:04.109 { 00:20:04.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.109 "dma_device_type": 2 00:20:04.109 } 00:20:04.109 ], 00:20:04.109 "driver_specific": { 00:20:04.109 "raid": { 00:20:04.109 "uuid": "974beb8c-3b18-40b1-8dbf-6680c5c59ca2", 00:20:04.109 "strip_size_kb": 64, 00:20:04.109 "state": "online", 00:20:04.109 "raid_level": "concat", 00:20:04.109 "superblock": true, 00:20:04.109 "num_base_bdevs": 4, 00:20:04.109 "num_base_bdevs_discovered": 4, 00:20:04.109 "num_base_bdevs_operational": 4, 00:20:04.109 "base_bdevs_list": [ 00:20:04.109 { 00:20:04.109 "name": "pt1", 00:20:04.109 "uuid": "00000000-0000-0000-0000-000000000001", 
00:20:04.109 "is_configured": true, 00:20:04.109 "data_offset": 2048, 00:20:04.109 "data_size": 63488 00:20:04.109 }, 00:20:04.109 { 00:20:04.109 "name": "pt2", 00:20:04.109 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:04.109 "is_configured": true, 00:20:04.109 "data_offset": 2048, 00:20:04.109 "data_size": 63488 00:20:04.109 }, 00:20:04.109 { 00:20:04.109 "name": "pt3", 00:20:04.109 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:04.109 "is_configured": true, 00:20:04.109 "data_offset": 2048, 00:20:04.109 "data_size": 63488 00:20:04.109 }, 00:20:04.109 { 00:20:04.109 "name": "pt4", 00:20:04.109 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:04.109 "is_configured": true, 00:20:04.109 "data_offset": 2048, 00:20:04.109 "data_size": 63488 00:20:04.109 } 00:20:04.109 ] 00:20:04.109 } 00:20:04.109 } 00:20:04.109 }' 00:20:04.367 22:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:04.367 22:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:04.367 pt2 00:20:04.367 pt3 00:20:04.367 pt4' 00:20:04.367 22:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:04.367 22:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:04.367 22:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:04.367 22:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:04.367 "name": "pt1", 00:20:04.367 "aliases": [ 00:20:04.367 "00000000-0000-0000-0000-000000000001" 00:20:04.367 ], 00:20:04.367 "product_name": "passthru", 00:20:04.367 "block_size": 512, 00:20:04.367 "num_blocks": 65536, 00:20:04.367 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:04.367 "assigned_rate_limits": { 
00:20:04.367 "rw_ios_per_sec": 0, 00:20:04.367 "rw_mbytes_per_sec": 0, 00:20:04.367 "r_mbytes_per_sec": 0, 00:20:04.367 "w_mbytes_per_sec": 0 00:20:04.367 }, 00:20:04.367 "claimed": true, 00:20:04.367 "claim_type": "exclusive_write", 00:20:04.367 "zoned": false, 00:20:04.367 "supported_io_types": { 00:20:04.367 "read": true, 00:20:04.367 "write": true, 00:20:04.367 "unmap": true, 00:20:04.367 "flush": true, 00:20:04.367 "reset": true, 00:20:04.367 "nvme_admin": false, 00:20:04.367 "nvme_io": false, 00:20:04.367 "nvme_io_md": false, 00:20:04.367 "write_zeroes": true, 00:20:04.367 "zcopy": true, 00:20:04.367 "get_zone_info": false, 00:20:04.368 "zone_management": false, 00:20:04.368 "zone_append": false, 00:20:04.368 "compare": false, 00:20:04.368 "compare_and_write": false, 00:20:04.368 "abort": true, 00:20:04.368 "seek_hole": false, 00:20:04.368 "seek_data": false, 00:20:04.368 "copy": true, 00:20:04.368 "nvme_iov_md": false 00:20:04.368 }, 00:20:04.368 "memory_domains": [ 00:20:04.368 { 00:20:04.368 "dma_device_id": "system", 00:20:04.368 "dma_device_type": 1 00:20:04.368 }, 00:20:04.368 { 00:20:04.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.368 "dma_device_type": 2 00:20:04.368 } 00:20:04.368 ], 00:20:04.368 "driver_specific": { 00:20:04.368 "passthru": { 00:20:04.368 "name": "pt1", 00:20:04.368 "base_bdev_name": "malloc1" 00:20:04.368 } 00:20:04.368 } 00:20:04.368 }' 00:20:04.368 22:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:04.625 22:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:04.625 22:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:04.625 22:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:04.625 22:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:04.625 22:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
00:20:04.625 22:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:04.625 22:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:04.625 22:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:04.625 22:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:04.625 22:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:04.884 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:04.884 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:04.884 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:04.884 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:04.884 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:04.884 "name": "pt2", 00:20:04.884 "aliases": [ 00:20:04.884 "00000000-0000-0000-0000-000000000002" 00:20:04.884 ], 00:20:04.884 "product_name": "passthru", 00:20:04.884 "block_size": 512, 00:20:04.884 "num_blocks": 65536, 00:20:04.884 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:04.884 "assigned_rate_limits": { 00:20:04.884 "rw_ios_per_sec": 0, 00:20:04.884 "rw_mbytes_per_sec": 0, 00:20:04.884 "r_mbytes_per_sec": 0, 00:20:04.884 "w_mbytes_per_sec": 0 00:20:04.884 }, 00:20:04.884 "claimed": true, 00:20:04.884 "claim_type": "exclusive_write", 00:20:04.884 "zoned": false, 00:20:04.884 "supported_io_types": { 00:20:04.884 "read": true, 00:20:04.884 "write": true, 00:20:04.884 "unmap": true, 00:20:04.884 "flush": true, 00:20:04.884 "reset": true, 00:20:04.884 "nvme_admin": false, 00:20:04.884 "nvme_io": false, 00:20:04.884 "nvme_io_md": false, 00:20:04.884 "write_zeroes": true, 
00:20:04.884 "zcopy": true, 00:20:04.884 "get_zone_info": false, 00:20:04.884 "zone_management": false, 00:20:04.884 "zone_append": false, 00:20:04.884 "compare": false, 00:20:04.884 "compare_and_write": false, 00:20:04.884 "abort": true, 00:20:04.884 "seek_hole": false, 00:20:04.884 "seek_data": false, 00:20:04.884 "copy": true, 00:20:04.884 "nvme_iov_md": false 00:20:04.884 }, 00:20:04.884 "memory_domains": [ 00:20:04.884 { 00:20:04.884 "dma_device_id": "system", 00:20:04.884 "dma_device_type": 1 00:20:04.884 }, 00:20:04.884 { 00:20:04.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.884 "dma_device_type": 2 00:20:04.884 } 00:20:04.884 ], 00:20:04.884 "driver_specific": { 00:20:04.884 "passthru": { 00:20:04.884 "name": "pt2", 00:20:04.884 "base_bdev_name": "malloc2" 00:20:04.884 } 00:20:04.884 } 00:20:04.884 }' 00:20:04.884 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:04.884 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:05.143 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:05.143 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:05.143 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:05.143 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:05.143 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:05.143 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:05.143 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:05.143 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:05.143 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:05.143 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # 
[[ null == null ]] 00:20:05.143 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:05.143 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:05.143 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:05.402 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:05.402 "name": "pt3", 00:20:05.402 "aliases": [ 00:20:05.402 "00000000-0000-0000-0000-000000000003" 00:20:05.402 ], 00:20:05.402 "product_name": "passthru", 00:20:05.402 "block_size": 512, 00:20:05.402 "num_blocks": 65536, 00:20:05.402 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:05.402 "assigned_rate_limits": { 00:20:05.402 "rw_ios_per_sec": 0, 00:20:05.402 "rw_mbytes_per_sec": 0, 00:20:05.402 "r_mbytes_per_sec": 0, 00:20:05.402 "w_mbytes_per_sec": 0 00:20:05.402 }, 00:20:05.402 "claimed": true, 00:20:05.402 "claim_type": "exclusive_write", 00:20:05.402 "zoned": false, 00:20:05.402 "supported_io_types": { 00:20:05.402 "read": true, 00:20:05.402 "write": true, 00:20:05.402 "unmap": true, 00:20:05.402 "flush": true, 00:20:05.402 "reset": true, 00:20:05.402 "nvme_admin": false, 00:20:05.402 "nvme_io": false, 00:20:05.402 "nvme_io_md": false, 00:20:05.402 "write_zeroes": true, 00:20:05.402 "zcopy": true, 00:20:05.402 "get_zone_info": false, 00:20:05.402 "zone_management": false, 00:20:05.402 "zone_append": false, 00:20:05.402 "compare": false, 00:20:05.402 "compare_and_write": false, 00:20:05.402 "abort": true, 00:20:05.402 "seek_hole": false, 00:20:05.402 "seek_data": false, 00:20:05.402 "copy": true, 00:20:05.402 "nvme_iov_md": false 00:20:05.402 }, 00:20:05.402 "memory_domains": [ 00:20:05.402 { 00:20:05.402 "dma_device_id": "system", 00:20:05.402 "dma_device_type": 1 00:20:05.402 }, 00:20:05.402 { 00:20:05.402 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:20:05.402 "dma_device_type": 2 00:20:05.402 } 00:20:05.402 ], 00:20:05.402 "driver_specific": { 00:20:05.402 "passthru": { 00:20:05.402 "name": "pt3", 00:20:05.402 "base_bdev_name": "malloc3" 00:20:05.402 } 00:20:05.402 } 00:20:05.402 }' 00:20:05.402 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:05.402 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:05.402 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:05.402 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:05.661 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:05.661 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:05.661 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:05.661 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:05.661 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:05.661 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:05.661 22:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:05.661 22:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:05.661 22:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:05.661 22:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:20:05.661 22:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:05.920 22:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:05.920 "name": "pt4", 00:20:05.920 
"aliases": [ 00:20:05.920 "00000000-0000-0000-0000-000000000004" 00:20:05.920 ], 00:20:05.920 "product_name": "passthru", 00:20:05.920 "block_size": 512, 00:20:05.920 "num_blocks": 65536, 00:20:05.920 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:05.920 "assigned_rate_limits": { 00:20:05.920 "rw_ios_per_sec": 0, 00:20:05.920 "rw_mbytes_per_sec": 0, 00:20:05.920 "r_mbytes_per_sec": 0, 00:20:05.920 "w_mbytes_per_sec": 0 00:20:05.920 }, 00:20:05.920 "claimed": true, 00:20:05.920 "claim_type": "exclusive_write", 00:20:05.920 "zoned": false, 00:20:05.920 "supported_io_types": { 00:20:05.920 "read": true, 00:20:05.920 "write": true, 00:20:05.920 "unmap": true, 00:20:05.920 "flush": true, 00:20:05.920 "reset": true, 00:20:05.920 "nvme_admin": false, 00:20:05.920 "nvme_io": false, 00:20:05.920 "nvme_io_md": false, 00:20:05.920 "write_zeroes": true, 00:20:05.920 "zcopy": true, 00:20:05.920 "get_zone_info": false, 00:20:05.920 "zone_management": false, 00:20:05.920 "zone_append": false, 00:20:05.920 "compare": false, 00:20:05.920 "compare_and_write": false, 00:20:05.920 "abort": true, 00:20:05.920 "seek_hole": false, 00:20:05.920 "seek_data": false, 00:20:05.920 "copy": true, 00:20:05.920 "nvme_iov_md": false 00:20:05.920 }, 00:20:05.920 "memory_domains": [ 00:20:05.920 { 00:20:05.920 "dma_device_id": "system", 00:20:05.920 "dma_device_type": 1 00:20:05.920 }, 00:20:05.920 { 00:20:05.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:05.920 "dma_device_type": 2 00:20:05.920 } 00:20:05.920 ], 00:20:05.920 "driver_specific": { 00:20:05.920 "passthru": { 00:20:05.920 "name": "pt4", 00:20:05.920 "base_bdev_name": "malloc4" 00:20:05.920 } 00:20:05.920 } 00:20:05.920 }' 00:20:05.920 22:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:05.920 22:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:05.920 22:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:20:05.920 22:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:05.920 22:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:06.179 22:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:06.179 22:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:06.179 22:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:06.179 22:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:06.179 22:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:06.179 22:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:06.179 22:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:06.179 22:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:20:06.179 22:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:06.438 [2024-07-13 22:03:25.640691] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:06.438 22:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 974beb8c-3b18-40b1-8dbf-6680c5c59ca2 '!=' 974beb8c-3b18-40b1-8dbf-6680c5c59ca2 ']' 00:20:06.438 22:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:20:06.438 22:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:06.438 22:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:06.438 22:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1438277 00:20:06.438 22:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 1438277 ']' 
00:20:06.438 22:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 1438277 00:20:06.438 22:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:20:06.438 22:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:06.438 22:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1438277 00:20:06.438 22:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:06.438 22:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:06.438 22:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1438277' 00:20:06.438 killing process with pid 1438277 00:20:06.438 22:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 1438277 00:20:06.439 [2024-07-13 22:03:25.705572] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:06.439 [2024-07-13 22:03:25.705652] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:06.439 [2024-07-13 22:03:25.705719] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:06.439 22:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 1438277 00:20:06.439 [2024-07-13 22:03:25.705732] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:20:06.697 [2024-07-13 22:03:26.025144] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:08.075 22:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:20:08.075 00:20:08.075 real 0m14.003s 00:20:08.075 user 0m23.746s 00:20:08.075 sys 0m2.559s 00:20:08.075 22:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:08.075 
22:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.075 ************************************ 00:20:08.075 END TEST raid_superblock_test 00:20:08.075 ************************************ 00:20:08.075 22:03:27 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:08.075 22:03:27 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:20:08.075 22:03:27 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:08.075 22:03:27 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:08.075 22:03:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:08.075 ************************************ 00:20:08.075 START TEST raid_read_error_test 00:20:08.075 ************************************ 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 read 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:08.075 22:03:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.VOQ3HffDUb 00:20:08.075 
22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1440942 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1440942 /var/tmp/spdk-raid.sock 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 1440942 ']' 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:08.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:08.075 22:03:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.075 [2024-07-13 22:03:27.362158] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:20:08.075 [2024-07-13 22:03:27.362272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1440942 ] 00:20:08.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:08.075 EAL: Requested device 0000:3d:01.0 cannot be used
[... the qat_pci_device_allocate() / "EAL: Requested device ... cannot be used" pair repeats identically for each remaining QAT virtual function, 0000:3d:01.1 through 0000:3f:02.7 ...]
00:20:08.335 [2024-07-13 22:03:27.518809] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.594 [2024-07-13 22:03:27.733631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.594 [2024-07-13 22:03:27.980526] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:08.594 [2024-07-13 22:03:27.980554] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:08.853 22:03:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:08.853 22:03:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:20:08.853 22:03:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:08.853 22:03:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:09.112 BaseBdev1_malloc 00:20:09.112 22:03:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:09.112 true 00:20:09.371 22:03:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:09.371 [2024-07-13 22:03:28.675337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:09.371 [2024-07-13 22:03:28.675389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.371 [2024-07-13 22:03:28.675430] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:20:09.371 [2024-07-13 22:03:28.675446] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.371 [2024-07-13 22:03:28.677538] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.371 [2024-07-13 22:03:28.677569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:09.371 BaseBdev1 00:20:09.371 22:03:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:09.371 22:03:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:09.630 BaseBdev2_malloc 00:20:09.630 22:03:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:09.889 true 00:20:09.889 22:03:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:09.889 [2024-07-13 22:03:29.209119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:20:09.889 [2024-07-13 22:03:29.209170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.889 [2024-07-13 22:03:29.209209] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:20:09.889 [2024-07-13 22:03:29.209225] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.889 [2024-07-13 22:03:29.211323] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.889 [2024-07-13 22:03:29.211353] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:09.889 BaseBdev2 00:20:09.889 22:03:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:09.889 22:03:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:10.148 BaseBdev3_malloc 00:20:10.148 22:03:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:10.408 true 00:20:10.408 22:03:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:10.408 [2024-07-13 22:03:29.733404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:10.408 [2024-07-13 22:03:29.733455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.408 [2024-07-13 22:03:29.733479] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:20:10.408 [2024-07-13 22:03:29.733493] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.408 [2024-07-13 
22:03:29.735636] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.408 [2024-07-13 22:03:29.735666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:10.408 BaseBdev3 00:20:10.408 22:03:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:10.408 22:03:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:10.668 BaseBdev4_malloc 00:20:10.668 22:03:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:20:10.927 true 00:20:10.927 22:03:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:20:10.927 [2024-07-13 22:03:30.258180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:20:10.927 [2024-07-13 22:03:30.258249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.927 [2024-07-13 22:03:30.258290] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:20:10.927 [2024-07-13 22:03:30.258303] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.927 [2024-07-13 22:03:30.260470] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.927 [2024-07-13 22:03:30.260502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:10.927 BaseBdev4 00:20:10.927 22:03:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:20:11.186 [2024-07-13 22:03:30.430674] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:11.187 [2024-07-13 22:03:30.432426] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:11.187 [2024-07-13 22:03:30.432498] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:11.187 [2024-07-13 22:03:30.432557] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:11.187 [2024-07-13 22:03:30.432766] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042c80 00:20:11.187 [2024-07-13 22:03:30.432783] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:11.187 [2024-07-13 22:03:30.433054] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:20:11.187 [2024-07-13 22:03:30.433265] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042c80 00:20:11.187 [2024-07-13 22:03:30.433275] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042c80 00:20:11.187 [2024-07-13 22:03:30.433428] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:11.187 22:03:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:20:11.187 22:03:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:11.187 22:03:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:11.187 22:03:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:11.187 22:03:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:11.187 22:03:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:11.187 22:03:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:11.187 22:03:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:11.187 22:03:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:11.187 22:03:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:11.187 22:03:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.187 22:03:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.446 22:03:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:11.446 "name": "raid_bdev1", 00:20:11.446 "uuid": "11dec210-9b84-48cd-b707-0a5e42ab1058", 00:20:11.446 "strip_size_kb": 64, 00:20:11.446 "state": "online", 00:20:11.446 "raid_level": "concat", 00:20:11.446 "superblock": true, 00:20:11.446 "num_base_bdevs": 4, 00:20:11.446 "num_base_bdevs_discovered": 4, 00:20:11.446 "num_base_bdevs_operational": 4, 00:20:11.446 "base_bdevs_list": [ 00:20:11.446 { 00:20:11.446 "name": "BaseBdev1", 00:20:11.446 "uuid": "9bd87970-46c7-528e-a1d7-cda1938ef46f", 00:20:11.446 "is_configured": true, 00:20:11.446 "data_offset": 2048, 00:20:11.446 "data_size": 63488 00:20:11.446 }, 00:20:11.446 { 00:20:11.446 "name": "BaseBdev2", 00:20:11.446 "uuid": "f1137bb1-39b0-50d6-b5a5-18aa761c9207", 00:20:11.446 "is_configured": true, 00:20:11.446 "data_offset": 2048, 00:20:11.446 "data_size": 63488 00:20:11.446 }, 00:20:11.446 { 00:20:11.446 "name": "BaseBdev3", 00:20:11.446 "uuid": "38755781-f5ec-5885-b1cd-083c27b998a6", 00:20:11.446 "is_configured": true, 00:20:11.446 "data_offset": 2048, 00:20:11.446 "data_size": 63488 
00:20:11.446 }, 00:20:11.446 { 00:20:11.446 "name": "BaseBdev4", 00:20:11.446 "uuid": "9c25c44e-b718-545b-b8a8-4b9622c5bc3a", 00:20:11.446 "is_configured": true, 00:20:11.446 "data_offset": 2048, 00:20:11.446 "data_size": 63488 00:20:11.446 } 00:20:11.446 ] 00:20:11.446 }' 00:20:11.446 22:03:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:11.446 22:03:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.012 22:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:20:12.012 22:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:12.012 [2024-07-13 22:03:31.194036] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:20:12.949 22:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:20:12.949 22:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:20:12.949 22:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:20:12.949 22:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:20:12.949 22:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:20:12.949 22:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:12.949 22:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:12.949 22:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:12.949 22:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 
-- # local strip_size=64 00:20:12.949 22:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:12.949 22:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:12.949 22:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:12.949 22:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:12.949 22:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:12.949 22:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.949 22:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.208 22:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:13.208 "name": "raid_bdev1", 00:20:13.208 "uuid": "11dec210-9b84-48cd-b707-0a5e42ab1058", 00:20:13.208 "strip_size_kb": 64, 00:20:13.208 "state": "online", 00:20:13.208 "raid_level": "concat", 00:20:13.208 "superblock": true, 00:20:13.208 "num_base_bdevs": 4, 00:20:13.208 "num_base_bdevs_discovered": 4, 00:20:13.208 "num_base_bdevs_operational": 4, 00:20:13.208 "base_bdevs_list": [ 00:20:13.208 { 00:20:13.208 "name": "BaseBdev1", 00:20:13.208 "uuid": "9bd87970-46c7-528e-a1d7-cda1938ef46f", 00:20:13.208 "is_configured": true, 00:20:13.208 "data_offset": 2048, 00:20:13.208 "data_size": 63488 00:20:13.208 }, 00:20:13.208 { 00:20:13.208 "name": "BaseBdev2", 00:20:13.208 "uuid": "f1137bb1-39b0-50d6-b5a5-18aa761c9207", 00:20:13.208 "is_configured": true, 00:20:13.208 "data_offset": 2048, 00:20:13.208 "data_size": 63488 00:20:13.208 }, 00:20:13.208 { 00:20:13.208 "name": "BaseBdev3", 00:20:13.208 "uuid": "38755781-f5ec-5885-b1cd-083c27b998a6", 00:20:13.208 "is_configured": true, 00:20:13.208 
"data_offset": 2048, 00:20:13.208 "data_size": 63488 00:20:13.208 }, 00:20:13.208 { 00:20:13.208 "name": "BaseBdev4", 00:20:13.208 "uuid": "9c25c44e-b718-545b-b8a8-4b9622c5bc3a", 00:20:13.208 "is_configured": true, 00:20:13.208 "data_offset": 2048, 00:20:13.208 "data_size": 63488 00:20:13.208 } 00:20:13.208 ] 00:20:13.208 }' 00:20:13.208 22:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:13.208 22:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.777 22:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:13.777 [2024-07-13 22:03:33.113983] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:13.777 [2024-07-13 22:03:33.114027] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:13.777 [2024-07-13 22:03:33.116390] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:13.777 [2024-07-13 22:03:33.116437] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:13.777 [2024-07-13 22:03:33.116477] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:13.777 [2024-07-13 22:03:33.116497] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042c80 name raid_bdev1, state offline 00:20:13.777 0 00:20:13.777 22:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1440942 00:20:13.777 22:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 1440942 ']' 00:20:13.777 22:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 1440942 00:20:13.777 22:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:20:13.777 22:03:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:13.777 22:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1440942 00:20:14.035 22:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:14.035 22:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:14.035 22:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1440942' 00:20:14.035 killing process with pid 1440942 00:20:14.035 22:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 1440942 00:20:14.035 [2024-07-13 22:03:33.184806] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:14.035 22:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 1440942 00:20:14.293 [2024-07-13 22:03:33.439547] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:15.671 22:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.VOQ3HffDUb 00:20:15.671 22:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:20:15.671 22:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:20:15.671 22:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:20:15.671 22:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:20:15.671 22:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:15.671 22:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:15.671 22:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:20:15.671 00:20:15.671 real 0m7.432s 00:20:15.671 user 0m10.508s 00:20:15.671 sys 0m1.243s 00:20:15.671 22:03:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:20:15.671 22:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.671 ************************************ 00:20:15.671 END TEST raid_read_error_test 00:20:15.671 ************************************ 00:20:15.671 22:03:34 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:15.671 22:03:34 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:20:15.671 22:03:34 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:15.671 22:03:34 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:15.671 22:03:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:15.671 ************************************ 00:20:15.671 START TEST raid_write_error_test 00:20:15.671 ************************************ 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 write 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:20:15.671 22:03:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:20:15.671 22:03:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.NelOpE6ArO 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1442302 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1442302 /var/tmp/spdk-raid.sock 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 1442302 ']' 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:15.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:15.671 22:03:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.671 [2024-07-13 22:03:34.874652] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:20:15.671 [2024-07-13 22:03:34.874750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1442302 ] 00:20:15.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.671 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:15.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.671 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:15.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.671 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:15.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.671 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:15.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.671 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:15.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.671 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:15.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.671 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:15.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.671 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:15.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.671 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:15.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.671 EAL: Requested device 0000:3d:02.1 cannot be used 00:20:15.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.671 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:15.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.671 EAL: Requested device 0000:3d:02.3 cannot be used 
00:20:15.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.671 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:15.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.671 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:15.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.672 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:15.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.672 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:15.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.672 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:15.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.672 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:15.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.672 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:15.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.672 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:15.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.672 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:15.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.672 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:15.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.672 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:15.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.672 EAL: Requested device 0000:3f:01.7 cannot be used 00:20:15.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.672 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:15.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.672 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:15.672 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.672 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:15.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.672 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:15.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.672 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:15.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.672 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:15.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.672 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:15.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:15.672 EAL: Requested device 0000:3f:02.7 cannot be used 00:20:15.672 [2024-07-13 22:03:35.036500] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.931 [2024-07-13 22:03:35.242664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.191 [2024-07-13 22:03:35.482584] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:16.191 [2024-07-13 22:03:35.482612] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:16.450 22:03:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:16.450 22:03:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:20:16.450 22:03:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:16.450 22:03:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:16.450 BaseBdev1_malloc 00:20:16.709 22:03:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:16.709 true 00:20:16.709 22:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:16.969 [2024-07-13 22:03:36.174080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:16.969 [2024-07-13 22:03:36.174135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.969 [2024-07-13 22:03:36.174175] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:20:16.969 [2024-07-13 22:03:36.174192] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.969 [2024-07-13 22:03:36.176293] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.969 [2024-07-13 22:03:36.176326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:16.969 BaseBdev1 00:20:16.969 22:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:16.969 22:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:17.301 BaseBdev2_malloc 00:20:17.301 22:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:17.301 true 00:20:17.301 22:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:17.560 [2024-07-13 22:03:36.712640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:20:17.560 [2024-07-13 22:03:36.712695] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:17.560 [2024-07-13 22:03:36.712733] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:20:17.560 [2024-07-13 22:03:36.712749] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:17.560 [2024-07-13 22:03:36.714838] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:17.560 [2024-07-13 22:03:36.714869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:17.560 BaseBdev2 00:20:17.560 22:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:17.560 22:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:17.560 BaseBdev3_malloc 00:20:17.560 22:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:17.819 true 00:20:17.819 22:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:18.079 [2024-07-13 22:03:37.250101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:18.079 [2024-07-13 22:03:37.250150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:18.079 [2024-07-13 22:03:37.250191] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:20:18.079 [2024-07-13 22:03:37.250204] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:18.079 
[2024-07-13 22:03:37.252309] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:18.079 [2024-07-13 22:03:37.252344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:18.079 BaseBdev3 00:20:18.079 22:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:18.079 22:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:18.079 BaseBdev4_malloc 00:20:18.338 22:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:20:18.338 true 00:20:18.338 22:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:20:18.598 [2024-07-13 22:03:37.784227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:20:18.598 [2024-07-13 22:03:37.784282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:18.598 [2024-07-13 22:03:37.784322] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:20:18.598 [2024-07-13 22:03:37.784335] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:18.598 [2024-07-13 22:03:37.786417] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:18.598 [2024-07-13 22:03:37.786447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:18.598 BaseBdev4 00:20:18.598 22:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:20:18.598 [2024-07-13 22:03:37.952714] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:18.598 [2024-07-13 22:03:37.954449] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:18.598 [2024-07-13 22:03:37.954519] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:18.598 [2024-07-13 22:03:37.954576] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:18.598 [2024-07-13 22:03:37.954782] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042c80 00:20:18.598 [2024-07-13 22:03:37.954797] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:18.598 [2024-07-13 22:03:37.955052] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:20:18.598 [2024-07-13 22:03:37.955253] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042c80 00:20:18.598 [2024-07-13 22:03:37.955264] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042c80 00:20:18.598 [2024-07-13 22:03:37.955422] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.598 22:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:20:18.598 22:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:18.598 22:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:18.598 22:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:18.598 22:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:18.598 22:03:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:18.598 22:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:18.598 22:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:18.598 22:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:18.598 22:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:18.598 22:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.598 22:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.857 22:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:18.857 "name": "raid_bdev1", 00:20:18.857 "uuid": "a600244c-5beb-4c36-8789-043b24291977", 00:20:18.857 "strip_size_kb": 64, 00:20:18.857 "state": "online", 00:20:18.857 "raid_level": "concat", 00:20:18.857 "superblock": true, 00:20:18.857 "num_base_bdevs": 4, 00:20:18.857 "num_base_bdevs_discovered": 4, 00:20:18.857 "num_base_bdevs_operational": 4, 00:20:18.857 "base_bdevs_list": [ 00:20:18.857 { 00:20:18.857 "name": "BaseBdev1", 00:20:18.857 "uuid": "783c1298-9cf2-5fd7-a72f-6f54e959a835", 00:20:18.857 "is_configured": true, 00:20:18.857 "data_offset": 2048, 00:20:18.857 "data_size": 63488 00:20:18.857 }, 00:20:18.857 { 00:20:18.857 "name": "BaseBdev2", 00:20:18.857 "uuid": "1dd5fed0-9294-529f-8248-99de305779dd", 00:20:18.857 "is_configured": true, 00:20:18.857 "data_offset": 2048, 00:20:18.857 "data_size": 63488 00:20:18.857 }, 00:20:18.857 { 00:20:18.857 "name": "BaseBdev3", 00:20:18.857 "uuid": "df14854f-a33d-57d0-adb1-396d312d1ece", 00:20:18.857 "is_configured": true, 00:20:18.857 "data_offset": 2048, 00:20:18.857 "data_size": 
63488 00:20:18.857 }, 00:20:18.857 { 00:20:18.857 "name": "BaseBdev4", 00:20:18.857 "uuid": "0a9d3047-6820-5344-a766-203f8df2a5a4", 00:20:18.857 "is_configured": true, 00:20:18.857 "data_offset": 2048, 00:20:18.857 "data_size": 63488 00:20:18.858 } 00:20:18.858 ] 00:20:18.858 }' 00:20:18.858 22:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:18.858 22:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.426 22:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:20:19.426 22:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:19.426 [2024-07-13 22:03:38.716129] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:20:20.363 22:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:20:20.623 22:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:20:20.623 22:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:20:20.623 22:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:20:20.623 22:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:20:20.623 22:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:20.623 22:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:20.623 22:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:20.623 22:03:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:20.623 22:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:20.623 22:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:20.623 22:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:20.623 22:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:20.623 22:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:20.623 22:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.623 22:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.623 22:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:20.623 "name": "raid_bdev1", 00:20:20.623 "uuid": "a600244c-5beb-4c36-8789-043b24291977", 00:20:20.623 "strip_size_kb": 64, 00:20:20.623 "state": "online", 00:20:20.623 "raid_level": "concat", 00:20:20.623 "superblock": true, 00:20:20.623 "num_base_bdevs": 4, 00:20:20.623 "num_base_bdevs_discovered": 4, 00:20:20.623 "num_base_bdevs_operational": 4, 00:20:20.623 "base_bdevs_list": [ 00:20:20.623 { 00:20:20.623 "name": "BaseBdev1", 00:20:20.623 "uuid": "783c1298-9cf2-5fd7-a72f-6f54e959a835", 00:20:20.623 "is_configured": true, 00:20:20.623 "data_offset": 2048, 00:20:20.623 "data_size": 63488 00:20:20.623 }, 00:20:20.623 { 00:20:20.623 "name": "BaseBdev2", 00:20:20.623 "uuid": "1dd5fed0-9294-529f-8248-99de305779dd", 00:20:20.623 "is_configured": true, 00:20:20.623 "data_offset": 2048, 00:20:20.623 "data_size": 63488 00:20:20.623 }, 00:20:20.623 { 00:20:20.623 "name": "BaseBdev3", 00:20:20.623 "uuid": "df14854f-a33d-57d0-adb1-396d312d1ece", 00:20:20.623 
"is_configured": true, 00:20:20.623 "data_offset": 2048, 00:20:20.623 "data_size": 63488 00:20:20.623 }, 00:20:20.623 { 00:20:20.623 "name": "BaseBdev4", 00:20:20.623 "uuid": "0a9d3047-6820-5344-a766-203f8df2a5a4", 00:20:20.623 "is_configured": true, 00:20:20.623 "data_offset": 2048, 00:20:20.623 "data_size": 63488 00:20:20.623 } 00:20:20.623 ] 00:20:20.623 }' 00:20:20.623 22:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:20.623 22:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.191 22:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:21.449 [2024-07-13 22:03:40.629068] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:21.449 [2024-07-13 22:03:40.629109] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:21.449 [2024-07-13 22:03:40.631351] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:21.449 [2024-07-13 22:03:40.631398] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.449 [2024-07-13 22:03:40.631437] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:21.449 [2024-07-13 22:03:40.631458] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042c80 name raid_bdev1, state offline 00:20:21.449 0 00:20:21.449 22:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1442302 00:20:21.449 22:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 1442302 ']' 00:20:21.449 22:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 1442302 00:20:21.449 22:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:20:21.449 
22:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:21.449 22:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1442302 00:20:21.449 22:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:21.449 22:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:21.449 22:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1442302' 00:20:21.449 killing process with pid 1442302 00:20:21.449 22:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 1442302 00:20:21.449 [2024-07-13 22:03:40.699983] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:21.449 22:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 1442302 00:20:21.708 [2024-07-13 22:03:40.955087] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:23.086 22:03:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:20:23.086 22:03:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.NelOpE6ArO 00:20:23.086 22:03:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:20:23.086 22:03:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:20:23.086 22:03:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:20:23.086 22:03:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:23.086 22:03:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:23.086 22:03:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:20:23.086 00:20:23.086 real 0m7.459s 00:20:23.086 user 0m10.570s 00:20:23.086 sys 0m1.209s 00:20:23.086 22:03:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:23.086 22:03:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.086 ************************************ 00:20:23.086 END TEST raid_write_error_test 00:20:23.086 ************************************ 00:20:23.086 22:03:42 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:23.086 22:03:42 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:20:23.086 22:03:42 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:20:23.086 22:03:42 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:23.086 22:03:42 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:23.086 22:03:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:23.086 ************************************ 00:20:23.086 START TEST raid_state_function_test 00:20:23.086 ************************************ 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 false 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 
-- # (( i++ )) 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:20:23.086 
22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1443717 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1443717' 00:20:23.086 Process raid pid: 1443717 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1443717 /var/tmp/spdk-raid.sock 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 1443717 ']' 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:23.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.086 22:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:23.086 [2024-07-13 22:03:42.400212] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:20:23.086 [2024-07-13 22:03:42.400303] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.345 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:23.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.345 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:23.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.345 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:23.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.345 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:23.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.345 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:23.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.345 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:23.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.345 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:23.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.345 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:23.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.345 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:23.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.345 EAL: Requested device 0000:3d:02.1 cannot be used 00:20:23.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.345 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:23.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.345 EAL: Requested device 0000:3d:02.3 cannot be used 00:20:23.345 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.345 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:23.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.346 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:23.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.346 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:23.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.346 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:23.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.346 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:23.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.346 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:23.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.346 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:23.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.346 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:23.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.346 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:23.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.346 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:23.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.346 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:23.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.346 EAL: Requested device 0000:3f:01.7 cannot be used 00:20:23.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.346 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:23.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.346 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:23.346 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.346 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:23.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.346 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:23.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.346 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:23.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.346 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:23.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.346 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:23.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:23.346 EAL: Requested device 0000:3f:02.7 cannot be used 00:20:23.346 [2024-07-13 22:03:42.562894] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.605 [2024-07-13 22:03:42.761573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.865 [2024-07-13 22:03:43.005684] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:23.865 [2024-07-13 22:03:43.005711] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:23.865 22:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:23.865 22:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:20:23.865 22:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:24.124 [2024-07-13 22:03:43.288182] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:24.124 [2024-07-13 22:03:43.288226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:20:24.124 [2024-07-13 22:03:43.288237] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:24.124 [2024-07-13 22:03:43.288249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:24.124 [2024-07-13 22:03:43.288257] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:24.124 [2024-07-13 22:03:43.288268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:24.124 [2024-07-13 22:03:43.288276] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:24.124 [2024-07-13 22:03:43.288287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:24.124 22:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:24.124 22:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:24.124 22:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:24.124 22:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:24.124 22:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:24.124 22:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:24.124 22:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:24.124 22:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:24.124 22:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:24.124 22:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:24.124 22:03:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.124 22:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:24.124 22:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:24.124 "name": "Existed_Raid", 00:20:24.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.124 "strip_size_kb": 0, 00:20:24.124 "state": "configuring", 00:20:24.124 "raid_level": "raid1", 00:20:24.124 "superblock": false, 00:20:24.124 "num_base_bdevs": 4, 00:20:24.124 "num_base_bdevs_discovered": 0, 00:20:24.124 "num_base_bdevs_operational": 4, 00:20:24.124 "base_bdevs_list": [ 00:20:24.124 { 00:20:24.124 "name": "BaseBdev1", 00:20:24.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.124 "is_configured": false, 00:20:24.124 "data_offset": 0, 00:20:24.124 "data_size": 0 00:20:24.124 }, 00:20:24.124 { 00:20:24.124 "name": "BaseBdev2", 00:20:24.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.124 "is_configured": false, 00:20:24.124 "data_offset": 0, 00:20:24.124 "data_size": 0 00:20:24.124 }, 00:20:24.124 { 00:20:24.124 "name": "BaseBdev3", 00:20:24.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.124 "is_configured": false, 00:20:24.124 "data_offset": 0, 00:20:24.124 "data_size": 0 00:20:24.124 }, 00:20:24.124 { 00:20:24.124 "name": "BaseBdev4", 00:20:24.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.124 "is_configured": false, 00:20:24.124 "data_offset": 0, 00:20:24.124 "data_size": 0 00:20:24.124 } 00:20:24.124 ] 00:20:24.124 }' 00:20:24.124 22:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:24.124 22:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.691 22:03:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:24.950 [2024-07-13 22:03:44.094188] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:24.950 [2024-07-13 22:03:44.094224] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:20:24.950 22:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:24.950 [2024-07-13 22:03:44.254643] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:24.950 [2024-07-13 22:03:44.254681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:24.950 [2024-07-13 22:03:44.254691] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:24.950 [2024-07-13 22:03:44.254724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:24.950 [2024-07-13 22:03:44.254732] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:24.950 [2024-07-13 22:03:44.254743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:24.950 [2024-07-13 22:03:44.254751] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:24.950 [2024-07-13 22:03:44.254761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:24.950 22:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:25.208 [2024-07-13 22:03:44.463607] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:25.208 BaseBdev1 00:20:25.208 22:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:25.208 22:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:25.208 22:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:25.208 22:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:25.208 22:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:25.208 22:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:25.208 22:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:25.467 22:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:25.467 [ 00:20:25.467 { 00:20:25.467 "name": "BaseBdev1", 00:20:25.467 "aliases": [ 00:20:25.467 "36bd739e-33eb-4b8f-97c6-9244eadd0915" 00:20:25.467 ], 00:20:25.467 "product_name": "Malloc disk", 00:20:25.467 "block_size": 512, 00:20:25.467 "num_blocks": 65536, 00:20:25.467 "uuid": "36bd739e-33eb-4b8f-97c6-9244eadd0915", 00:20:25.467 "assigned_rate_limits": { 00:20:25.467 "rw_ios_per_sec": 0, 00:20:25.467 "rw_mbytes_per_sec": 0, 00:20:25.467 "r_mbytes_per_sec": 0, 00:20:25.467 "w_mbytes_per_sec": 0 00:20:25.467 }, 00:20:25.467 "claimed": true, 00:20:25.467 "claim_type": "exclusive_write", 00:20:25.467 "zoned": false, 00:20:25.467 "supported_io_types": { 00:20:25.467 "read": true, 00:20:25.467 "write": true, 00:20:25.467 "unmap": true, 00:20:25.467 "flush": true, 00:20:25.467 
"reset": true, 00:20:25.467 "nvme_admin": false, 00:20:25.467 "nvme_io": false, 00:20:25.467 "nvme_io_md": false, 00:20:25.467 "write_zeroes": true, 00:20:25.467 "zcopy": true, 00:20:25.467 "get_zone_info": false, 00:20:25.467 "zone_management": false, 00:20:25.467 "zone_append": false, 00:20:25.467 "compare": false, 00:20:25.467 "compare_and_write": false, 00:20:25.467 "abort": true, 00:20:25.467 "seek_hole": false, 00:20:25.467 "seek_data": false, 00:20:25.467 "copy": true, 00:20:25.467 "nvme_iov_md": false 00:20:25.467 }, 00:20:25.467 "memory_domains": [ 00:20:25.467 { 00:20:25.467 "dma_device_id": "system", 00:20:25.467 "dma_device_type": 1 00:20:25.467 }, 00:20:25.467 { 00:20:25.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:25.467 "dma_device_type": 2 00:20:25.467 } 00:20:25.467 ], 00:20:25.467 "driver_specific": {} 00:20:25.467 } 00:20:25.467 ] 00:20:25.467 22:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:25.467 22:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:25.467 22:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:25.467 22:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:25.467 22:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:25.467 22:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:25.467 22:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:25.467 22:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:25.467 22:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:25.467 22:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:20:25.467 22:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:25.467 22:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.467 22:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:25.726 22:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:25.726 "name": "Existed_Raid", 00:20:25.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.726 "strip_size_kb": 0, 00:20:25.726 "state": "configuring", 00:20:25.726 "raid_level": "raid1", 00:20:25.726 "superblock": false, 00:20:25.726 "num_base_bdevs": 4, 00:20:25.726 "num_base_bdevs_discovered": 1, 00:20:25.726 "num_base_bdevs_operational": 4, 00:20:25.726 "base_bdevs_list": [ 00:20:25.726 { 00:20:25.726 "name": "BaseBdev1", 00:20:25.726 "uuid": "36bd739e-33eb-4b8f-97c6-9244eadd0915", 00:20:25.726 "is_configured": true, 00:20:25.726 "data_offset": 0, 00:20:25.726 "data_size": 65536 00:20:25.726 }, 00:20:25.726 { 00:20:25.726 "name": "BaseBdev2", 00:20:25.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.726 "is_configured": false, 00:20:25.726 "data_offset": 0, 00:20:25.726 "data_size": 0 00:20:25.726 }, 00:20:25.726 { 00:20:25.726 "name": "BaseBdev3", 00:20:25.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.726 "is_configured": false, 00:20:25.726 "data_offset": 0, 00:20:25.726 "data_size": 0 00:20:25.726 }, 00:20:25.726 { 00:20:25.726 "name": "BaseBdev4", 00:20:25.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.726 "is_configured": false, 00:20:25.726 "data_offset": 0, 00:20:25.726 "data_size": 0 00:20:25.726 } 00:20:25.726 ] 00:20:25.726 }' 00:20:25.726 22:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:20:25.726 22:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.295 22:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:26.295 [2024-07-13 22:03:45.614628] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:26.295 [2024-07-13 22:03:45.614674] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:20:26.295 22:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:26.554 [2024-07-13 22:03:45.783153] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:26.554 [2024-07-13 22:03:45.784804] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:26.554 [2024-07-13 22:03:45.784836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:26.554 [2024-07-13 22:03:45.784846] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:26.554 [2024-07-13 22:03:45.784873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:26.554 [2024-07-13 22:03:45.784882] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:26.554 [2024-07-13 22:03:45.784895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:26.554 22:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:26.554 22:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:26.554 22:03:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:26.555 22:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:26.555 22:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:26.555 22:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:26.555 22:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:26.555 22:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:26.555 22:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:26.555 22:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:26.555 22:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:26.555 22:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:26.555 22:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.555 22:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:26.813 22:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:26.813 "name": "Existed_Raid", 00:20:26.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.813 "strip_size_kb": 0, 00:20:26.813 "state": "configuring", 00:20:26.813 "raid_level": "raid1", 00:20:26.813 "superblock": false, 00:20:26.813 "num_base_bdevs": 4, 00:20:26.813 "num_base_bdevs_discovered": 1, 00:20:26.813 "num_base_bdevs_operational": 4, 00:20:26.813 "base_bdevs_list": [ 00:20:26.813 { 00:20:26.813 
"name": "BaseBdev1", 00:20:26.813 "uuid": "36bd739e-33eb-4b8f-97c6-9244eadd0915", 00:20:26.813 "is_configured": true, 00:20:26.813 "data_offset": 0, 00:20:26.813 "data_size": 65536 00:20:26.813 }, 00:20:26.813 { 00:20:26.813 "name": "BaseBdev2", 00:20:26.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.813 "is_configured": false, 00:20:26.813 "data_offset": 0, 00:20:26.813 "data_size": 0 00:20:26.813 }, 00:20:26.813 { 00:20:26.813 "name": "BaseBdev3", 00:20:26.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.813 "is_configured": false, 00:20:26.813 "data_offset": 0, 00:20:26.813 "data_size": 0 00:20:26.813 }, 00:20:26.813 { 00:20:26.813 "name": "BaseBdev4", 00:20:26.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.813 "is_configured": false, 00:20:26.813 "data_offset": 0, 00:20:26.813 "data_size": 0 00:20:26.813 } 00:20:26.813 ] 00:20:26.813 }' 00:20:26.813 22:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:26.813 22:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.071 22:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:27.330 [2024-07-13 22:03:46.618333] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:27.330 BaseBdev2 00:20:27.330 22:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:27.330 22:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:27.330 22:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:27.330 22:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:27.330 22:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 
-- # [[ -z '' ]] 00:20:27.330 22:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:27.330 22:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:27.589 22:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:27.589 [ 00:20:27.589 { 00:20:27.589 "name": "BaseBdev2", 00:20:27.589 "aliases": [ 00:20:27.589 "251a56f0-eacc-45dc-b6d8-fe55ccc8ea5d" 00:20:27.589 ], 00:20:27.589 "product_name": "Malloc disk", 00:20:27.589 "block_size": 512, 00:20:27.589 "num_blocks": 65536, 00:20:27.589 "uuid": "251a56f0-eacc-45dc-b6d8-fe55ccc8ea5d", 00:20:27.589 "assigned_rate_limits": { 00:20:27.589 "rw_ios_per_sec": 0, 00:20:27.589 "rw_mbytes_per_sec": 0, 00:20:27.589 "r_mbytes_per_sec": 0, 00:20:27.589 "w_mbytes_per_sec": 0 00:20:27.589 }, 00:20:27.589 "claimed": true, 00:20:27.589 "claim_type": "exclusive_write", 00:20:27.589 "zoned": false, 00:20:27.589 "supported_io_types": { 00:20:27.589 "read": true, 00:20:27.589 "write": true, 00:20:27.589 "unmap": true, 00:20:27.589 "flush": true, 00:20:27.589 "reset": true, 00:20:27.589 "nvme_admin": false, 00:20:27.589 "nvme_io": false, 00:20:27.589 "nvme_io_md": false, 00:20:27.589 "write_zeroes": true, 00:20:27.589 "zcopy": true, 00:20:27.589 "get_zone_info": false, 00:20:27.589 "zone_management": false, 00:20:27.589 "zone_append": false, 00:20:27.589 "compare": false, 00:20:27.589 "compare_and_write": false, 00:20:27.589 "abort": true, 00:20:27.589 "seek_hole": false, 00:20:27.589 "seek_data": false, 00:20:27.589 "copy": true, 00:20:27.589 "nvme_iov_md": false 00:20:27.589 }, 00:20:27.589 "memory_domains": [ 00:20:27.589 { 00:20:27.589 "dma_device_id": "system", 00:20:27.589 
"dma_device_type": 1 00:20:27.589 }, 00:20:27.589 { 00:20:27.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.589 "dma_device_type": 2 00:20:27.589 } 00:20:27.589 ], 00:20:27.589 "driver_specific": {} 00:20:27.589 } 00:20:27.589 ] 00:20:27.589 22:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:27.589 22:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:27.589 22:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:27.589 22:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:27.589 22:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:27.589 22:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:27.589 22:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:27.589 22:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:27.589 22:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:27.589 22:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:27.589 22:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:27.589 22:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:27.589 22:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:27.589 22:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.589 22:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:20:27.849 22:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:27.849 "name": "Existed_Raid", 00:20:27.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.849 "strip_size_kb": 0, 00:20:27.849 "state": "configuring", 00:20:27.849 "raid_level": "raid1", 00:20:27.849 "superblock": false, 00:20:27.849 "num_base_bdevs": 4, 00:20:27.849 "num_base_bdevs_discovered": 2, 00:20:27.849 "num_base_bdevs_operational": 4, 00:20:27.849 "base_bdevs_list": [ 00:20:27.849 { 00:20:27.849 "name": "BaseBdev1", 00:20:27.849 "uuid": "36bd739e-33eb-4b8f-97c6-9244eadd0915", 00:20:27.849 "is_configured": true, 00:20:27.849 "data_offset": 0, 00:20:27.849 "data_size": 65536 00:20:27.849 }, 00:20:27.849 { 00:20:27.849 "name": "BaseBdev2", 00:20:27.849 "uuid": "251a56f0-eacc-45dc-b6d8-fe55ccc8ea5d", 00:20:27.849 "is_configured": true, 00:20:27.849 "data_offset": 0, 00:20:27.849 "data_size": 65536 00:20:27.849 }, 00:20:27.849 { 00:20:27.849 "name": "BaseBdev3", 00:20:27.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.849 "is_configured": false, 00:20:27.849 "data_offset": 0, 00:20:27.849 "data_size": 0 00:20:27.849 }, 00:20:27.849 { 00:20:27.849 "name": "BaseBdev4", 00:20:27.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.849 "is_configured": false, 00:20:27.849 "data_offset": 0, 00:20:27.849 "data_size": 0 00:20:27.849 } 00:20:27.849 ] 00:20:27.849 }' 00:20:27.849 22:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:27.849 22:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.417 22:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:28.675 [2024-07-13 22:03:47.827687] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:20:28.675 BaseBdev3 00:20:28.675 22:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:28.675 22:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:28.675 22:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:28.675 22:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:28.675 22:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:28.675 22:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:28.675 22:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:28.675 22:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:28.934 [ 00:20:28.934 { 00:20:28.934 "name": "BaseBdev3", 00:20:28.934 "aliases": [ 00:20:28.934 "b9d15c22-3903-4b0d-a62e-92c820356bea" 00:20:28.934 ], 00:20:28.934 "product_name": "Malloc disk", 00:20:28.934 "block_size": 512, 00:20:28.934 "num_blocks": 65536, 00:20:28.934 "uuid": "b9d15c22-3903-4b0d-a62e-92c820356bea", 00:20:28.934 "assigned_rate_limits": { 00:20:28.934 "rw_ios_per_sec": 0, 00:20:28.934 "rw_mbytes_per_sec": 0, 00:20:28.934 "r_mbytes_per_sec": 0, 00:20:28.934 "w_mbytes_per_sec": 0 00:20:28.934 }, 00:20:28.935 "claimed": true, 00:20:28.935 "claim_type": "exclusive_write", 00:20:28.935 "zoned": false, 00:20:28.935 "supported_io_types": { 00:20:28.935 "read": true, 00:20:28.935 "write": true, 00:20:28.935 "unmap": true, 00:20:28.935 "flush": true, 00:20:28.935 "reset": true, 00:20:28.935 "nvme_admin": false, 00:20:28.935 "nvme_io": 
false, 00:20:28.935 "nvme_io_md": false, 00:20:28.935 "write_zeroes": true, 00:20:28.935 "zcopy": true, 00:20:28.935 "get_zone_info": false, 00:20:28.935 "zone_management": false, 00:20:28.935 "zone_append": false, 00:20:28.935 "compare": false, 00:20:28.935 "compare_and_write": false, 00:20:28.935 "abort": true, 00:20:28.935 "seek_hole": false, 00:20:28.935 "seek_data": false, 00:20:28.935 "copy": true, 00:20:28.935 "nvme_iov_md": false 00:20:28.935 }, 00:20:28.935 "memory_domains": [ 00:20:28.935 { 00:20:28.935 "dma_device_id": "system", 00:20:28.935 "dma_device_type": 1 00:20:28.935 }, 00:20:28.935 { 00:20:28.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:28.935 "dma_device_type": 2 00:20:28.935 } 00:20:28.935 ], 00:20:28.935 "driver_specific": {} 00:20:28.935 } 00:20:28.935 ] 00:20:28.935 22:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:28.935 22:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:28.935 22:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:28.935 22:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:28.935 22:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:28.935 22:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:28.935 22:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:28.935 22:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:28.935 22:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:28.935 22:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:28.935 22:03:48 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:28.935 22:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:28.935 22:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:28.935 22:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.935 22:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.194 22:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:29.194 "name": "Existed_Raid", 00:20:29.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.194 "strip_size_kb": 0, 00:20:29.194 "state": "configuring", 00:20:29.194 "raid_level": "raid1", 00:20:29.194 "superblock": false, 00:20:29.194 "num_base_bdevs": 4, 00:20:29.194 "num_base_bdevs_discovered": 3, 00:20:29.194 "num_base_bdevs_operational": 4, 00:20:29.194 "base_bdevs_list": [ 00:20:29.194 { 00:20:29.194 "name": "BaseBdev1", 00:20:29.194 "uuid": "36bd739e-33eb-4b8f-97c6-9244eadd0915", 00:20:29.194 "is_configured": true, 00:20:29.194 "data_offset": 0, 00:20:29.194 "data_size": 65536 00:20:29.194 }, 00:20:29.194 { 00:20:29.194 "name": "BaseBdev2", 00:20:29.194 "uuid": "251a56f0-eacc-45dc-b6d8-fe55ccc8ea5d", 00:20:29.194 "is_configured": true, 00:20:29.194 "data_offset": 0, 00:20:29.194 "data_size": 65536 00:20:29.194 }, 00:20:29.194 { 00:20:29.194 "name": "BaseBdev3", 00:20:29.194 "uuid": "b9d15c22-3903-4b0d-a62e-92c820356bea", 00:20:29.194 "is_configured": true, 00:20:29.194 "data_offset": 0, 00:20:29.194 "data_size": 65536 00:20:29.194 }, 00:20:29.194 { 00:20:29.194 "name": "BaseBdev4", 00:20:29.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.194 "is_configured": false, 00:20:29.194 "data_offset": 0, 00:20:29.194 "data_size": 0 00:20:29.194 } 
00:20:29.194 ] 00:20:29.194 }' 00:20:29.194 22:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:29.194 22:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.452 22:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:29.712 [2024-07-13 22:03:49.012817] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:29.712 [2024-07-13 22:03:49.012865] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:20:29.712 [2024-07-13 22:03:49.012876] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:29.712 [2024-07-13 22:03:49.013131] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:20:29.712 [2024-07-13 22:03:49.013308] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:20:29.712 [2024-07-13 22:03:49.013321] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:20:29.712 [2024-07-13 22:03:49.013564] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:29.712 BaseBdev4 00:20:29.712 22:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:20:29.712 22:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:20:29.712 22:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:29.712 22:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:29.712 22:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:29.712 22:03:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:29.712 22:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:29.970 22:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:29.970 [ 00:20:29.970 { 00:20:29.970 "name": "BaseBdev4", 00:20:29.970 "aliases": [ 00:20:29.970 "b29c4778-e2a8-44a2-9709-5a589c73674d" 00:20:29.970 ], 00:20:29.970 "product_name": "Malloc disk", 00:20:29.970 "block_size": 512, 00:20:29.970 "num_blocks": 65536, 00:20:29.970 "uuid": "b29c4778-e2a8-44a2-9709-5a589c73674d", 00:20:29.970 "assigned_rate_limits": { 00:20:29.971 "rw_ios_per_sec": 0, 00:20:29.971 "rw_mbytes_per_sec": 0, 00:20:29.971 "r_mbytes_per_sec": 0, 00:20:29.971 "w_mbytes_per_sec": 0 00:20:29.971 }, 00:20:29.971 "claimed": true, 00:20:29.971 "claim_type": "exclusive_write", 00:20:29.971 "zoned": false, 00:20:29.971 "supported_io_types": { 00:20:29.971 "read": true, 00:20:29.971 "write": true, 00:20:29.971 "unmap": true, 00:20:29.971 "flush": true, 00:20:29.971 "reset": true, 00:20:29.971 "nvme_admin": false, 00:20:29.971 "nvme_io": false, 00:20:29.971 "nvme_io_md": false, 00:20:29.971 "write_zeroes": true, 00:20:29.971 "zcopy": true, 00:20:29.971 "get_zone_info": false, 00:20:29.971 "zone_management": false, 00:20:29.971 "zone_append": false, 00:20:29.971 "compare": false, 00:20:29.971 "compare_and_write": false, 00:20:29.971 "abort": true, 00:20:29.971 "seek_hole": false, 00:20:29.971 "seek_data": false, 00:20:29.971 "copy": true, 00:20:29.971 "nvme_iov_md": false 00:20:29.971 }, 00:20:29.971 "memory_domains": [ 00:20:29.971 { 00:20:29.971 "dma_device_id": "system", 00:20:29.971 "dma_device_type": 1 00:20:29.971 }, 00:20:29.971 { 00:20:29.971 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:20:29.971 "dma_device_type": 2 00:20:29.971 } 00:20:29.971 ], 00:20:29.971 "driver_specific": {} 00:20:29.971 } 00:20:29.971 ] 00:20:30.230 22:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:30.230 22:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:30.230 22:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:30.230 22:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:20:30.230 22:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:30.230 22:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:30.230 22:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:30.230 22:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:30.230 22:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:30.230 22:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:30.230 22:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:30.230 22:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:30.230 22:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:30.230 22:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.230 22:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.230 22:03:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:30.230 "name": "Existed_Raid", 00:20:30.230 "uuid": "ace8dbaa-6212-43b5-b2d7-22d12d26c082", 00:20:30.230 "strip_size_kb": 0, 00:20:30.230 "state": "online", 00:20:30.230 "raid_level": "raid1", 00:20:30.230 "superblock": false, 00:20:30.230 "num_base_bdevs": 4, 00:20:30.230 "num_base_bdevs_discovered": 4, 00:20:30.230 "num_base_bdevs_operational": 4, 00:20:30.230 "base_bdevs_list": [ 00:20:30.230 { 00:20:30.230 "name": "BaseBdev1", 00:20:30.230 "uuid": "36bd739e-33eb-4b8f-97c6-9244eadd0915", 00:20:30.230 "is_configured": true, 00:20:30.230 "data_offset": 0, 00:20:30.230 "data_size": 65536 00:20:30.230 }, 00:20:30.230 { 00:20:30.230 "name": "BaseBdev2", 00:20:30.230 "uuid": "251a56f0-eacc-45dc-b6d8-fe55ccc8ea5d", 00:20:30.230 "is_configured": true, 00:20:30.230 "data_offset": 0, 00:20:30.230 "data_size": 65536 00:20:30.230 }, 00:20:30.230 { 00:20:30.230 "name": "BaseBdev3", 00:20:30.230 "uuid": "b9d15c22-3903-4b0d-a62e-92c820356bea", 00:20:30.230 "is_configured": true, 00:20:30.230 "data_offset": 0, 00:20:30.230 "data_size": 65536 00:20:30.230 }, 00:20:30.230 { 00:20:30.230 "name": "BaseBdev4", 00:20:30.230 "uuid": "b29c4778-e2a8-44a2-9709-5a589c73674d", 00:20:30.230 "is_configured": true, 00:20:30.230 "data_offset": 0, 00:20:30.230 "data_size": 65536 00:20:30.230 } 00:20:30.230 ] 00:20:30.230 }' 00:20:30.230 22:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:30.230 22:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.825 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:30.825 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:30.825 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:30.825 22:03:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:30.825 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:30.825 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:30.825 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:30.825 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:30.825 [2024-07-13 22:03:50.172230] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:30.825 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:30.825 "name": "Existed_Raid", 00:20:30.825 "aliases": [ 00:20:30.825 "ace8dbaa-6212-43b5-b2d7-22d12d26c082" 00:20:30.825 ], 00:20:30.825 "product_name": "Raid Volume", 00:20:30.825 "block_size": 512, 00:20:30.825 "num_blocks": 65536, 00:20:30.825 "uuid": "ace8dbaa-6212-43b5-b2d7-22d12d26c082", 00:20:30.825 "assigned_rate_limits": { 00:20:30.825 "rw_ios_per_sec": 0, 00:20:30.825 "rw_mbytes_per_sec": 0, 00:20:30.825 "r_mbytes_per_sec": 0, 00:20:30.825 "w_mbytes_per_sec": 0 00:20:30.825 }, 00:20:30.825 "claimed": false, 00:20:30.825 "zoned": false, 00:20:30.825 "supported_io_types": { 00:20:30.825 "read": true, 00:20:30.825 "write": true, 00:20:30.825 "unmap": false, 00:20:30.825 "flush": false, 00:20:30.825 "reset": true, 00:20:30.825 "nvme_admin": false, 00:20:30.825 "nvme_io": false, 00:20:30.825 "nvme_io_md": false, 00:20:30.825 "write_zeroes": true, 00:20:30.825 "zcopy": false, 00:20:30.825 "get_zone_info": false, 00:20:30.825 "zone_management": false, 00:20:30.825 "zone_append": false, 00:20:30.825 "compare": false, 00:20:30.825 "compare_and_write": false, 00:20:30.825 "abort": false, 00:20:30.825 "seek_hole": false, 00:20:30.825 "seek_data": 
false, 00:20:30.825 "copy": false, 00:20:30.825 "nvme_iov_md": false 00:20:30.825 }, 00:20:30.825 "memory_domains": [ 00:20:30.825 { 00:20:30.825 "dma_device_id": "system", 00:20:30.825 "dma_device_type": 1 00:20:30.825 }, 00:20:30.825 { 00:20:30.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.825 "dma_device_type": 2 00:20:30.825 }, 00:20:30.825 { 00:20:30.825 "dma_device_id": "system", 00:20:30.825 "dma_device_type": 1 00:20:30.825 }, 00:20:30.825 { 00:20:30.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.825 "dma_device_type": 2 00:20:30.825 }, 00:20:30.825 { 00:20:30.825 "dma_device_id": "system", 00:20:30.825 "dma_device_type": 1 00:20:30.825 }, 00:20:30.825 { 00:20:30.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.825 "dma_device_type": 2 00:20:30.825 }, 00:20:30.825 { 00:20:30.825 "dma_device_id": "system", 00:20:30.825 "dma_device_type": 1 00:20:30.825 }, 00:20:30.825 { 00:20:30.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.825 "dma_device_type": 2 00:20:30.825 } 00:20:30.825 ], 00:20:30.825 "driver_specific": { 00:20:30.825 "raid": { 00:20:30.825 "uuid": "ace8dbaa-6212-43b5-b2d7-22d12d26c082", 00:20:30.825 "strip_size_kb": 0, 00:20:30.825 "state": "online", 00:20:30.825 "raid_level": "raid1", 00:20:30.825 "superblock": false, 00:20:30.825 "num_base_bdevs": 4, 00:20:30.825 "num_base_bdevs_discovered": 4, 00:20:30.825 "num_base_bdevs_operational": 4, 00:20:30.825 "base_bdevs_list": [ 00:20:30.825 { 00:20:30.825 "name": "BaseBdev1", 00:20:30.825 "uuid": "36bd739e-33eb-4b8f-97c6-9244eadd0915", 00:20:30.825 "is_configured": true, 00:20:30.825 "data_offset": 0, 00:20:30.825 "data_size": 65536 00:20:30.825 }, 00:20:30.825 { 00:20:30.825 "name": "BaseBdev2", 00:20:30.825 "uuid": "251a56f0-eacc-45dc-b6d8-fe55ccc8ea5d", 00:20:30.825 "is_configured": true, 00:20:30.825 "data_offset": 0, 00:20:30.825 "data_size": 65536 00:20:30.825 }, 00:20:30.825 { 00:20:30.825 "name": "BaseBdev3", 00:20:30.825 "uuid": 
"b9d15c22-3903-4b0d-a62e-92c820356bea", 00:20:30.825 "is_configured": true, 00:20:30.825 "data_offset": 0, 00:20:30.825 "data_size": 65536 00:20:30.825 }, 00:20:30.825 { 00:20:30.825 "name": "BaseBdev4", 00:20:30.825 "uuid": "b29c4778-e2a8-44a2-9709-5a589c73674d", 00:20:30.825 "is_configured": true, 00:20:30.825 "data_offset": 0, 00:20:30.825 "data_size": 65536 00:20:30.825 } 00:20:30.825 ] 00:20:30.825 } 00:20:30.825 } 00:20:30.825 }' 00:20:30.825 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:31.084 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:31.084 BaseBdev2 00:20:31.084 BaseBdev3 00:20:31.084 BaseBdev4' 00:20:31.084 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:31.084 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:31.084 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:31.084 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:31.084 "name": "BaseBdev1", 00:20:31.084 "aliases": [ 00:20:31.084 "36bd739e-33eb-4b8f-97c6-9244eadd0915" 00:20:31.084 ], 00:20:31.084 "product_name": "Malloc disk", 00:20:31.084 "block_size": 512, 00:20:31.084 "num_blocks": 65536, 00:20:31.084 "uuid": "36bd739e-33eb-4b8f-97c6-9244eadd0915", 00:20:31.084 "assigned_rate_limits": { 00:20:31.084 "rw_ios_per_sec": 0, 00:20:31.084 "rw_mbytes_per_sec": 0, 00:20:31.084 "r_mbytes_per_sec": 0, 00:20:31.084 "w_mbytes_per_sec": 0 00:20:31.084 }, 00:20:31.084 "claimed": true, 00:20:31.084 "claim_type": "exclusive_write", 00:20:31.084 "zoned": false, 00:20:31.084 "supported_io_types": { 00:20:31.084 "read": true, 00:20:31.084 
"write": true, 00:20:31.084 "unmap": true, 00:20:31.084 "flush": true, 00:20:31.084 "reset": true, 00:20:31.084 "nvme_admin": false, 00:20:31.084 "nvme_io": false, 00:20:31.084 "nvme_io_md": false, 00:20:31.084 "write_zeroes": true, 00:20:31.084 "zcopy": true, 00:20:31.084 "get_zone_info": false, 00:20:31.084 "zone_management": false, 00:20:31.084 "zone_append": false, 00:20:31.084 "compare": false, 00:20:31.084 "compare_and_write": false, 00:20:31.084 "abort": true, 00:20:31.084 "seek_hole": false, 00:20:31.084 "seek_data": false, 00:20:31.084 "copy": true, 00:20:31.084 "nvme_iov_md": false 00:20:31.084 }, 00:20:31.084 "memory_domains": [ 00:20:31.084 { 00:20:31.084 "dma_device_id": "system", 00:20:31.084 "dma_device_type": 1 00:20:31.084 }, 00:20:31.084 { 00:20:31.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.084 "dma_device_type": 2 00:20:31.084 } 00:20:31.084 ], 00:20:31.084 "driver_specific": {} 00:20:31.084 }' 00:20:31.084 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:31.084 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:31.342 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:31.342 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:31.342 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:31.342 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:31.342 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:31.342 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:31.342 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:31.342 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:31.342 22:03:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:31.342 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:31.342 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:31.342 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:31.342 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:31.600 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:31.600 "name": "BaseBdev2", 00:20:31.600 "aliases": [ 00:20:31.600 "251a56f0-eacc-45dc-b6d8-fe55ccc8ea5d" 00:20:31.600 ], 00:20:31.600 "product_name": "Malloc disk", 00:20:31.600 "block_size": 512, 00:20:31.600 "num_blocks": 65536, 00:20:31.600 "uuid": "251a56f0-eacc-45dc-b6d8-fe55ccc8ea5d", 00:20:31.600 "assigned_rate_limits": { 00:20:31.600 "rw_ios_per_sec": 0, 00:20:31.600 "rw_mbytes_per_sec": 0, 00:20:31.600 "r_mbytes_per_sec": 0, 00:20:31.600 "w_mbytes_per_sec": 0 00:20:31.600 }, 00:20:31.600 "claimed": true, 00:20:31.600 "claim_type": "exclusive_write", 00:20:31.600 "zoned": false, 00:20:31.600 "supported_io_types": { 00:20:31.600 "read": true, 00:20:31.600 "write": true, 00:20:31.600 "unmap": true, 00:20:31.600 "flush": true, 00:20:31.600 "reset": true, 00:20:31.600 "nvme_admin": false, 00:20:31.600 "nvme_io": false, 00:20:31.600 "nvme_io_md": false, 00:20:31.600 "write_zeroes": true, 00:20:31.600 "zcopy": true, 00:20:31.601 "get_zone_info": false, 00:20:31.601 "zone_management": false, 00:20:31.601 "zone_append": false, 00:20:31.601 "compare": false, 00:20:31.601 "compare_and_write": false, 00:20:31.601 "abort": true, 00:20:31.601 "seek_hole": false, 00:20:31.601 "seek_data": false, 00:20:31.601 "copy": true, 00:20:31.601 "nvme_iov_md": false 00:20:31.601 }, 
00:20:31.601 "memory_domains": [ 00:20:31.601 { 00:20:31.601 "dma_device_id": "system", 00:20:31.601 "dma_device_type": 1 00:20:31.601 }, 00:20:31.601 { 00:20:31.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.601 "dma_device_type": 2 00:20:31.601 } 00:20:31.601 ], 00:20:31.601 "driver_specific": {} 00:20:31.601 }' 00:20:31.601 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:31.601 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:31.601 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:31.601 22:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:31.859 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:31.859 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:31.859 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:31.859 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:31.859 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:31.859 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:31.859 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:31.859 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:31.859 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:31.859 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:31.859 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:32.117 22:03:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:32.117 "name": "BaseBdev3", 00:20:32.117 "aliases": [ 00:20:32.117 "b9d15c22-3903-4b0d-a62e-92c820356bea" 00:20:32.117 ], 00:20:32.117 "product_name": "Malloc disk", 00:20:32.117 "block_size": 512, 00:20:32.117 "num_blocks": 65536, 00:20:32.117 "uuid": "b9d15c22-3903-4b0d-a62e-92c820356bea", 00:20:32.117 "assigned_rate_limits": { 00:20:32.117 "rw_ios_per_sec": 0, 00:20:32.117 "rw_mbytes_per_sec": 0, 00:20:32.117 "r_mbytes_per_sec": 0, 00:20:32.117 "w_mbytes_per_sec": 0 00:20:32.117 }, 00:20:32.117 "claimed": true, 00:20:32.117 "claim_type": "exclusive_write", 00:20:32.117 "zoned": false, 00:20:32.117 "supported_io_types": { 00:20:32.117 "read": true, 00:20:32.117 "write": true, 00:20:32.117 "unmap": true, 00:20:32.117 "flush": true, 00:20:32.117 "reset": true, 00:20:32.117 "nvme_admin": false, 00:20:32.117 "nvme_io": false, 00:20:32.117 "nvme_io_md": false, 00:20:32.117 "write_zeroes": true, 00:20:32.117 "zcopy": true, 00:20:32.117 "get_zone_info": false, 00:20:32.117 "zone_management": false, 00:20:32.117 "zone_append": false, 00:20:32.117 "compare": false, 00:20:32.117 "compare_and_write": false, 00:20:32.117 "abort": true, 00:20:32.117 "seek_hole": false, 00:20:32.117 "seek_data": false, 00:20:32.117 "copy": true, 00:20:32.117 "nvme_iov_md": false 00:20:32.117 }, 00:20:32.117 "memory_domains": [ 00:20:32.117 { 00:20:32.117 "dma_device_id": "system", 00:20:32.117 "dma_device_type": 1 00:20:32.117 }, 00:20:32.117 { 00:20:32.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.117 "dma_device_type": 2 00:20:32.117 } 00:20:32.118 ], 00:20:32.118 "driver_specific": {} 00:20:32.118 }' 00:20:32.118 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:32.118 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:32.118 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 
== 512 ]] 00:20:32.118 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:32.118 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:32.375 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:32.375 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:32.375 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:32.375 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:32.375 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:32.375 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:32.375 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:32.375 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:32.375 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:32.375 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:32.633 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:32.633 "name": "BaseBdev4", 00:20:32.633 "aliases": [ 00:20:32.633 "b29c4778-e2a8-44a2-9709-5a589c73674d" 00:20:32.633 ], 00:20:32.634 "product_name": "Malloc disk", 00:20:32.634 "block_size": 512, 00:20:32.634 "num_blocks": 65536, 00:20:32.634 "uuid": "b29c4778-e2a8-44a2-9709-5a589c73674d", 00:20:32.634 "assigned_rate_limits": { 00:20:32.634 "rw_ios_per_sec": 0, 00:20:32.634 "rw_mbytes_per_sec": 0, 00:20:32.634 "r_mbytes_per_sec": 0, 00:20:32.634 "w_mbytes_per_sec": 0 00:20:32.634 }, 00:20:32.634 "claimed": true, 00:20:32.634 
"claim_type": "exclusive_write", 00:20:32.634 "zoned": false, 00:20:32.634 "supported_io_types": { 00:20:32.634 "read": true, 00:20:32.634 "write": true, 00:20:32.634 "unmap": true, 00:20:32.634 "flush": true, 00:20:32.634 "reset": true, 00:20:32.634 "nvme_admin": false, 00:20:32.634 "nvme_io": false, 00:20:32.634 "nvme_io_md": false, 00:20:32.634 "write_zeroes": true, 00:20:32.634 "zcopy": true, 00:20:32.634 "get_zone_info": false, 00:20:32.634 "zone_management": false, 00:20:32.634 "zone_append": false, 00:20:32.634 "compare": false, 00:20:32.634 "compare_and_write": false, 00:20:32.634 "abort": true, 00:20:32.634 "seek_hole": false, 00:20:32.634 "seek_data": false, 00:20:32.634 "copy": true, 00:20:32.634 "nvme_iov_md": false 00:20:32.634 }, 00:20:32.634 "memory_domains": [ 00:20:32.634 { 00:20:32.634 "dma_device_id": "system", 00:20:32.634 "dma_device_type": 1 00:20:32.634 }, 00:20:32.634 { 00:20:32.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.634 "dma_device_type": 2 00:20:32.634 } 00:20:32.634 ], 00:20:32.634 "driver_specific": {} 00:20:32.634 }' 00:20:32.634 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:32.634 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:32.634 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:32.634 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:32.634 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:32.634 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:32.634 22:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:32.891 22:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:32.891 22:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == 
null ]] 00:20:32.891 22:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:32.891 22:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:32.891 22:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:32.891 22:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:33.150 [2024-07-13 22:03:52.301598] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:33.150 22:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:33.150 22:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:20:33.150 22:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:33.150 22:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:20:33.150 22:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:20:33.150 22:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:33.150 22:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:33.150 22:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:33.150 22:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:33.150 22:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:33.150 22:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:33.150 22:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:33.150 22:03:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:33.150 22:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:33.150 22:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:33.150 22:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.150 22:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:33.150 22:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:33.150 "name": "Existed_Raid", 00:20:33.150 "uuid": "ace8dbaa-6212-43b5-b2d7-22d12d26c082", 00:20:33.150 "strip_size_kb": 0, 00:20:33.150 "state": "online", 00:20:33.150 "raid_level": "raid1", 00:20:33.150 "superblock": false, 00:20:33.150 "num_base_bdevs": 4, 00:20:33.150 "num_base_bdevs_discovered": 3, 00:20:33.150 "num_base_bdevs_operational": 3, 00:20:33.150 "base_bdevs_list": [ 00:20:33.150 { 00:20:33.150 "name": null, 00:20:33.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.150 "is_configured": false, 00:20:33.150 "data_offset": 0, 00:20:33.150 "data_size": 65536 00:20:33.150 }, 00:20:33.150 { 00:20:33.150 "name": "BaseBdev2", 00:20:33.150 "uuid": "251a56f0-eacc-45dc-b6d8-fe55ccc8ea5d", 00:20:33.150 "is_configured": true, 00:20:33.150 "data_offset": 0, 00:20:33.150 "data_size": 65536 00:20:33.150 }, 00:20:33.150 { 00:20:33.150 "name": "BaseBdev3", 00:20:33.150 "uuid": "b9d15c22-3903-4b0d-a62e-92c820356bea", 00:20:33.150 "is_configured": true, 00:20:33.150 "data_offset": 0, 00:20:33.150 "data_size": 65536 00:20:33.150 }, 00:20:33.150 { 00:20:33.150 "name": "BaseBdev4", 00:20:33.150 "uuid": "b29c4778-e2a8-44a2-9709-5a589c73674d", 00:20:33.150 "is_configured": true, 00:20:33.150 "data_offset": 0, 00:20:33.150 
"data_size": 65536 00:20:33.150 } 00:20:33.150 ] 00:20:33.150 }' 00:20:33.150 22:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:33.150 22:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.717 22:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:33.717 22:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:33.717 22:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.717 22:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:33.978 22:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:33.978 22:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:33.978 22:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:33.978 [2024-07-13 22:03:53.360747] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:34.237 22:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:34.237 22:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:34.237 22:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.237 22:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:34.495 22:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:34.495 22:03:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:34.495 22:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:34.495 [2024-07-13 22:03:53.792852] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:34.754 22:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:34.754 22:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:34.754 22:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:34.755 22:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.755 22:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:34.755 22:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:34.755 22:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:35.014 [2024-07-13 22:03:54.208896] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:35.014 [2024-07-13 22:03:54.208997] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:35.014 [2024-07-13 22:03:54.301958] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:35.014 [2024-07-13 22:03:54.302022] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:35.014 [2024-07-13 22:03:54.302037] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x61600003ff80 name Existed_Raid, state offline 00:20:35.014 22:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:35.014 22:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:35.014 22:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.014 22:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:35.273 22:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:35.273 22:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:35.273 22:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:20:35.273 22:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:35.273 22:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:35.273 22:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:35.273 BaseBdev2 00:20:35.273 22:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:35.273 22:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:35.273 22:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:35.273 22:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:35.273 22:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:35.273 22:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 
00:20:35.273 22:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:35.533 22:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:35.793 [ 00:20:35.793 { 00:20:35.793 "name": "BaseBdev2", 00:20:35.793 "aliases": [ 00:20:35.793 "833b12b8-25b7-4774-a838-85d0b52b47f5" 00:20:35.793 ], 00:20:35.793 "product_name": "Malloc disk", 00:20:35.793 "block_size": 512, 00:20:35.793 "num_blocks": 65536, 00:20:35.793 "uuid": "833b12b8-25b7-4774-a838-85d0b52b47f5", 00:20:35.793 "assigned_rate_limits": { 00:20:35.793 "rw_ios_per_sec": 0, 00:20:35.793 "rw_mbytes_per_sec": 0, 00:20:35.793 "r_mbytes_per_sec": 0, 00:20:35.793 "w_mbytes_per_sec": 0 00:20:35.793 }, 00:20:35.793 "claimed": false, 00:20:35.793 "zoned": false, 00:20:35.793 "supported_io_types": { 00:20:35.793 "read": true, 00:20:35.793 "write": true, 00:20:35.793 "unmap": true, 00:20:35.793 "flush": true, 00:20:35.793 "reset": true, 00:20:35.793 "nvme_admin": false, 00:20:35.793 "nvme_io": false, 00:20:35.793 "nvme_io_md": false, 00:20:35.793 "write_zeroes": true, 00:20:35.793 "zcopy": true, 00:20:35.793 "get_zone_info": false, 00:20:35.793 "zone_management": false, 00:20:35.793 "zone_append": false, 00:20:35.793 "compare": false, 00:20:35.793 "compare_and_write": false, 00:20:35.793 "abort": true, 00:20:35.793 "seek_hole": false, 00:20:35.793 "seek_data": false, 00:20:35.793 "copy": true, 00:20:35.793 "nvme_iov_md": false 00:20:35.793 }, 00:20:35.793 "memory_domains": [ 00:20:35.793 { 00:20:35.793 "dma_device_id": "system", 00:20:35.793 "dma_device_type": 1 00:20:35.793 }, 00:20:35.793 { 00:20:35.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.793 "dma_device_type": 2 00:20:35.793 } 00:20:35.793 ], 00:20:35.793 
"driver_specific": {} 00:20:35.793 } 00:20:35.793 ] 00:20:35.793 22:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:35.793 22:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:35.793 22:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:35.793 22:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:36.053 BaseBdev3 00:20:36.053 22:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:36.053 22:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:36.053 22:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:36.053 22:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:36.053 22:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:36.053 22:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:36.053 22:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:36.053 22:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:36.312 [ 00:20:36.312 { 00:20:36.312 "name": "BaseBdev3", 00:20:36.312 "aliases": [ 00:20:36.312 "3c87150e-dc83-42a1-87d0-814ef6153e1d" 00:20:36.312 ], 00:20:36.313 "product_name": "Malloc disk", 00:20:36.313 "block_size": 512, 00:20:36.313 "num_blocks": 65536, 00:20:36.313 "uuid": 
"3c87150e-dc83-42a1-87d0-814ef6153e1d", 00:20:36.313 "assigned_rate_limits": { 00:20:36.313 "rw_ios_per_sec": 0, 00:20:36.313 "rw_mbytes_per_sec": 0, 00:20:36.313 "r_mbytes_per_sec": 0, 00:20:36.313 "w_mbytes_per_sec": 0 00:20:36.313 }, 00:20:36.313 "claimed": false, 00:20:36.313 "zoned": false, 00:20:36.313 "supported_io_types": { 00:20:36.313 "read": true, 00:20:36.313 "write": true, 00:20:36.313 "unmap": true, 00:20:36.313 "flush": true, 00:20:36.313 "reset": true, 00:20:36.313 "nvme_admin": false, 00:20:36.313 "nvme_io": false, 00:20:36.313 "nvme_io_md": false, 00:20:36.313 "write_zeroes": true, 00:20:36.313 "zcopy": true, 00:20:36.313 "get_zone_info": false, 00:20:36.313 "zone_management": false, 00:20:36.313 "zone_append": false, 00:20:36.313 "compare": false, 00:20:36.313 "compare_and_write": false, 00:20:36.313 "abort": true, 00:20:36.313 "seek_hole": false, 00:20:36.313 "seek_data": false, 00:20:36.313 "copy": true, 00:20:36.313 "nvme_iov_md": false 00:20:36.313 }, 00:20:36.313 "memory_domains": [ 00:20:36.313 { 00:20:36.313 "dma_device_id": "system", 00:20:36.313 "dma_device_type": 1 00:20:36.313 }, 00:20:36.313 { 00:20:36.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.313 "dma_device_type": 2 00:20:36.313 } 00:20:36.313 ], 00:20:36.313 "driver_specific": {} 00:20:36.313 } 00:20:36.313 ] 00:20:36.313 22:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:36.313 22:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:36.313 22:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:36.313 22:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:36.573 BaseBdev4 00:20:36.573 22:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 
00:20:36.573 22:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:20:36.573 22:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:36.573 22:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:36.573 22:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:36.573 22:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:36.573 22:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:36.573 22:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:36.833 [ 00:20:36.833 { 00:20:36.833 "name": "BaseBdev4", 00:20:36.833 "aliases": [ 00:20:36.833 "7afb3964-f3e4-4a77-80e5-7c7054128d0a" 00:20:36.833 ], 00:20:36.833 "product_name": "Malloc disk", 00:20:36.833 "block_size": 512, 00:20:36.833 "num_blocks": 65536, 00:20:36.833 "uuid": "7afb3964-f3e4-4a77-80e5-7c7054128d0a", 00:20:36.833 "assigned_rate_limits": { 00:20:36.833 "rw_ios_per_sec": 0, 00:20:36.833 "rw_mbytes_per_sec": 0, 00:20:36.833 "r_mbytes_per_sec": 0, 00:20:36.833 "w_mbytes_per_sec": 0 00:20:36.833 }, 00:20:36.833 "claimed": false, 00:20:36.833 "zoned": false, 00:20:36.833 "supported_io_types": { 00:20:36.833 "read": true, 00:20:36.833 "write": true, 00:20:36.833 "unmap": true, 00:20:36.833 "flush": true, 00:20:36.833 "reset": true, 00:20:36.833 "nvme_admin": false, 00:20:36.833 "nvme_io": false, 00:20:36.833 "nvme_io_md": false, 00:20:36.833 "write_zeroes": true, 00:20:36.833 "zcopy": true, 00:20:36.833 "get_zone_info": false, 00:20:36.833 "zone_management": false, 00:20:36.833 
"zone_append": false, 00:20:36.833 "compare": false, 00:20:36.833 "compare_and_write": false, 00:20:36.833 "abort": true, 00:20:36.833 "seek_hole": false, 00:20:36.833 "seek_data": false, 00:20:36.833 "copy": true, 00:20:36.833 "nvme_iov_md": false 00:20:36.833 }, 00:20:36.833 "memory_domains": [ 00:20:36.833 { 00:20:36.833 "dma_device_id": "system", 00:20:36.833 "dma_device_type": 1 00:20:36.833 }, 00:20:36.833 { 00:20:36.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.833 "dma_device_type": 2 00:20:36.833 } 00:20:36.833 ], 00:20:36.833 "driver_specific": {} 00:20:36.833 } 00:20:36.833 ] 00:20:36.833 22:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:36.833 22:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:36.833 22:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:36.833 22:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:36.833 [2024-07-13 22:03:56.192791] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:36.833 [2024-07-13 22:03:56.192835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:36.833 [2024-07-13 22:03:56.192858] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:36.833 [2024-07-13 22:03:56.194604] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:36.833 [2024-07-13 22:03:56.194649] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:36.833 22:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:36.833 22:03:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:36.833 22:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:36.833 22:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:36.833 22:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:36.833 22:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:36.833 22:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:36.833 22:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:36.833 22:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:36.833 22:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:36.833 22:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.833 22:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.093 22:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:37.093 "name": "Existed_Raid", 00:20:37.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.093 "strip_size_kb": 0, 00:20:37.093 "state": "configuring", 00:20:37.093 "raid_level": "raid1", 00:20:37.093 "superblock": false, 00:20:37.093 "num_base_bdevs": 4, 00:20:37.093 "num_base_bdevs_discovered": 3, 00:20:37.093 "num_base_bdevs_operational": 4, 00:20:37.093 "base_bdevs_list": [ 00:20:37.093 { 00:20:37.093 "name": "BaseBdev1", 00:20:37.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.093 "is_configured": false, 00:20:37.093 "data_offset": 
0, 00:20:37.093 "data_size": 0 00:20:37.093 }, 00:20:37.093 { 00:20:37.093 "name": "BaseBdev2", 00:20:37.093 "uuid": "833b12b8-25b7-4774-a838-85d0b52b47f5", 00:20:37.093 "is_configured": true, 00:20:37.093 "data_offset": 0, 00:20:37.093 "data_size": 65536 00:20:37.093 }, 00:20:37.093 { 00:20:37.093 "name": "BaseBdev3", 00:20:37.093 "uuid": "3c87150e-dc83-42a1-87d0-814ef6153e1d", 00:20:37.093 "is_configured": true, 00:20:37.093 "data_offset": 0, 00:20:37.093 "data_size": 65536 00:20:37.093 }, 00:20:37.093 { 00:20:37.093 "name": "BaseBdev4", 00:20:37.093 "uuid": "7afb3964-f3e4-4a77-80e5-7c7054128d0a", 00:20:37.093 "is_configured": true, 00:20:37.093 "data_offset": 0, 00:20:37.093 "data_size": 65536 00:20:37.093 } 00:20:37.093 ] 00:20:37.093 }' 00:20:37.093 22:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:37.093 22:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.661 22:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:37.661 [2024-07-13 22:03:57.006913] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:37.661 22:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:37.661 22:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:37.661 22:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:37.661 22:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:37.661 22:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:37.661 22:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:20:37.661 22:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:37.661 22:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:37.661 22:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:37.661 22:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:37.661 22:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.661 22:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.919 22:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:37.919 "name": "Existed_Raid", 00:20:37.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.919 "strip_size_kb": 0, 00:20:37.919 "state": "configuring", 00:20:37.919 "raid_level": "raid1", 00:20:37.919 "superblock": false, 00:20:37.919 "num_base_bdevs": 4, 00:20:37.919 "num_base_bdevs_discovered": 2, 00:20:37.919 "num_base_bdevs_operational": 4, 00:20:37.919 "base_bdevs_list": [ 00:20:37.919 { 00:20:37.919 "name": "BaseBdev1", 00:20:37.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.919 "is_configured": false, 00:20:37.919 "data_offset": 0, 00:20:37.919 "data_size": 0 00:20:37.919 }, 00:20:37.919 { 00:20:37.919 "name": null, 00:20:37.919 "uuid": "833b12b8-25b7-4774-a838-85d0b52b47f5", 00:20:37.919 "is_configured": false, 00:20:37.919 "data_offset": 0, 00:20:37.919 "data_size": 65536 00:20:37.919 }, 00:20:37.919 { 00:20:37.919 "name": "BaseBdev3", 00:20:37.919 "uuid": "3c87150e-dc83-42a1-87d0-814ef6153e1d", 00:20:37.919 "is_configured": true, 00:20:37.919 "data_offset": 0, 00:20:37.919 "data_size": 65536 00:20:37.919 }, 00:20:37.919 { 00:20:37.919 "name": "BaseBdev4", 00:20:37.919 
"uuid": "7afb3964-f3e4-4a77-80e5-7c7054128d0a", 00:20:37.919 "is_configured": true, 00:20:37.919 "data_offset": 0, 00:20:37.919 "data_size": 65536 00:20:37.919 } 00:20:37.919 ] 00:20:37.919 }' 00:20:37.919 22:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:37.919 22:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.486 22:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.486 22:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:38.486 22:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:38.487 22:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:38.746 [2024-07-13 22:03:58.043361] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:38.746 BaseBdev1 00:20:38.746 22:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:38.746 22:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:38.746 22:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:38.746 22:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:38.746 22:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:38.746 22:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:38.746 22:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:39.004 22:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:39.004 [ 00:20:39.004 { 00:20:39.004 "name": "BaseBdev1", 00:20:39.004 "aliases": [ 00:20:39.004 "9124db0a-2362-4c17-981a-31a76158b91e" 00:20:39.004 ], 00:20:39.004 "product_name": "Malloc disk", 00:20:39.004 "block_size": 512, 00:20:39.004 "num_blocks": 65536, 00:20:39.004 "uuid": "9124db0a-2362-4c17-981a-31a76158b91e", 00:20:39.004 "assigned_rate_limits": { 00:20:39.004 "rw_ios_per_sec": 0, 00:20:39.004 "rw_mbytes_per_sec": 0, 00:20:39.004 "r_mbytes_per_sec": 0, 00:20:39.004 "w_mbytes_per_sec": 0 00:20:39.004 }, 00:20:39.004 "claimed": true, 00:20:39.004 "claim_type": "exclusive_write", 00:20:39.004 "zoned": false, 00:20:39.004 "supported_io_types": { 00:20:39.004 "read": true, 00:20:39.004 "write": true, 00:20:39.004 "unmap": true, 00:20:39.004 "flush": true, 00:20:39.004 "reset": true, 00:20:39.004 "nvme_admin": false, 00:20:39.004 "nvme_io": false, 00:20:39.004 "nvme_io_md": false, 00:20:39.004 "write_zeroes": true, 00:20:39.004 "zcopy": true, 00:20:39.004 "get_zone_info": false, 00:20:39.004 "zone_management": false, 00:20:39.004 "zone_append": false, 00:20:39.005 "compare": false, 00:20:39.005 "compare_and_write": false, 00:20:39.005 "abort": true, 00:20:39.005 "seek_hole": false, 00:20:39.005 "seek_data": false, 00:20:39.005 "copy": true, 00:20:39.005 "nvme_iov_md": false 00:20:39.005 }, 00:20:39.005 "memory_domains": [ 00:20:39.005 { 00:20:39.005 "dma_device_id": "system", 00:20:39.005 "dma_device_type": 1 00:20:39.005 }, 00:20:39.005 { 00:20:39.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.005 "dma_device_type": 2 00:20:39.005 } 00:20:39.005 ], 00:20:39.005 "driver_specific": {} 00:20:39.005 } 00:20:39.005 ] 
00:20:39.005 22:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:39.005 22:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:39.005 22:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:39.005 22:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:39.005 22:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:39.005 22:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:39.005 22:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:39.005 22:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:39.005 22:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:39.005 22:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:39.005 22:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:39.264 22:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.264 22:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:39.264 22:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:39.264 "name": "Existed_Raid", 00:20:39.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.264 "strip_size_kb": 0, 00:20:39.264 "state": "configuring", 00:20:39.264 "raid_level": "raid1", 00:20:39.264 "superblock": false, 00:20:39.264 "num_base_bdevs": 4, 00:20:39.264 
"num_base_bdevs_discovered": 3, 00:20:39.264 "num_base_bdevs_operational": 4, 00:20:39.264 "base_bdevs_list": [ 00:20:39.264 { 00:20:39.264 "name": "BaseBdev1", 00:20:39.264 "uuid": "9124db0a-2362-4c17-981a-31a76158b91e", 00:20:39.264 "is_configured": true, 00:20:39.264 "data_offset": 0, 00:20:39.264 "data_size": 65536 00:20:39.264 }, 00:20:39.264 { 00:20:39.264 "name": null, 00:20:39.264 "uuid": "833b12b8-25b7-4774-a838-85d0b52b47f5", 00:20:39.264 "is_configured": false, 00:20:39.264 "data_offset": 0, 00:20:39.264 "data_size": 65536 00:20:39.264 }, 00:20:39.264 { 00:20:39.264 "name": "BaseBdev3", 00:20:39.264 "uuid": "3c87150e-dc83-42a1-87d0-814ef6153e1d", 00:20:39.264 "is_configured": true, 00:20:39.264 "data_offset": 0, 00:20:39.264 "data_size": 65536 00:20:39.264 }, 00:20:39.264 { 00:20:39.264 "name": "BaseBdev4", 00:20:39.264 "uuid": "7afb3964-f3e4-4a77-80e5-7c7054128d0a", 00:20:39.264 "is_configured": true, 00:20:39.264 "data_offset": 0, 00:20:39.264 "data_size": 65536 00:20:39.264 } 00:20:39.264 ] 00:20:39.264 }' 00:20:39.264 22:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:39.264 22:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.831 22:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.831 22:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:39.831 22:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:39.831 22:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:40.090 [2024-07-13 22:03:59.370935] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev3 00:20:40.091 22:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:40.091 22:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:40.091 22:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:40.091 22:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:40.091 22:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:40.091 22:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:40.091 22:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:40.091 22:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:40.091 22:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:40.091 22:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:40.091 22:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.091 22:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:40.351 22:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:40.351 "name": "Existed_Raid", 00:20:40.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.351 "strip_size_kb": 0, 00:20:40.351 "state": "configuring", 00:20:40.351 "raid_level": "raid1", 00:20:40.351 "superblock": false, 00:20:40.351 "num_base_bdevs": 4, 00:20:40.351 "num_base_bdevs_discovered": 2, 00:20:40.351 "num_base_bdevs_operational": 4, 00:20:40.351 "base_bdevs_list": 
[ 00:20:40.351 { 00:20:40.351 "name": "BaseBdev1", 00:20:40.351 "uuid": "9124db0a-2362-4c17-981a-31a76158b91e", 00:20:40.351 "is_configured": true, 00:20:40.351 "data_offset": 0, 00:20:40.351 "data_size": 65536 00:20:40.351 }, 00:20:40.351 { 00:20:40.351 "name": null, 00:20:40.351 "uuid": "833b12b8-25b7-4774-a838-85d0b52b47f5", 00:20:40.351 "is_configured": false, 00:20:40.351 "data_offset": 0, 00:20:40.351 "data_size": 65536 00:20:40.351 }, 00:20:40.351 { 00:20:40.351 "name": null, 00:20:40.351 "uuid": "3c87150e-dc83-42a1-87d0-814ef6153e1d", 00:20:40.351 "is_configured": false, 00:20:40.351 "data_offset": 0, 00:20:40.351 "data_size": 65536 00:20:40.351 }, 00:20:40.351 { 00:20:40.351 "name": "BaseBdev4", 00:20:40.351 "uuid": "7afb3964-f3e4-4a77-80e5-7c7054128d0a", 00:20:40.351 "is_configured": true, 00:20:40.351 "data_offset": 0, 00:20:40.351 "data_size": 65536 00:20:40.351 } 00:20:40.351 ] 00:20:40.351 }' 00:20:40.351 22:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:40.351 22:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.946 22:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:40.946 22:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.946 22:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:40.946 22:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:41.206 [2024-07-13 22:04:00.341505] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:41.206 22:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 
-- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:41.206 22:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:41.206 22:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:41.206 22:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:41.206 22:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:41.206 22:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:41.206 22:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:41.206 22:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:41.206 22:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:41.206 22:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:41.206 22:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:41.206 22:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.206 22:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:41.206 "name": "Existed_Raid", 00:20:41.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.206 "strip_size_kb": 0, 00:20:41.206 "state": "configuring", 00:20:41.206 "raid_level": "raid1", 00:20:41.206 "superblock": false, 00:20:41.206 "num_base_bdevs": 4, 00:20:41.206 "num_base_bdevs_discovered": 3, 00:20:41.206 "num_base_bdevs_operational": 4, 00:20:41.206 "base_bdevs_list": [ 00:20:41.206 { 00:20:41.206 "name": "BaseBdev1", 00:20:41.206 "uuid": 
"9124db0a-2362-4c17-981a-31a76158b91e", 00:20:41.206 "is_configured": true, 00:20:41.206 "data_offset": 0, 00:20:41.206 "data_size": 65536 00:20:41.206 }, 00:20:41.206 { 00:20:41.206 "name": null, 00:20:41.206 "uuid": "833b12b8-25b7-4774-a838-85d0b52b47f5", 00:20:41.206 "is_configured": false, 00:20:41.206 "data_offset": 0, 00:20:41.206 "data_size": 65536 00:20:41.206 }, 00:20:41.206 { 00:20:41.206 "name": "BaseBdev3", 00:20:41.206 "uuid": "3c87150e-dc83-42a1-87d0-814ef6153e1d", 00:20:41.206 "is_configured": true, 00:20:41.206 "data_offset": 0, 00:20:41.206 "data_size": 65536 00:20:41.206 }, 00:20:41.206 { 00:20:41.206 "name": "BaseBdev4", 00:20:41.206 "uuid": "7afb3964-f3e4-4a77-80e5-7c7054128d0a", 00:20:41.206 "is_configured": true, 00:20:41.206 "data_offset": 0, 00:20:41.206 "data_size": 65536 00:20:41.206 } 00:20:41.206 ] 00:20:41.206 }' 00:20:41.206 22:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:41.206 22:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.775 22:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.775 22:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:42.035 22:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:42.035 22:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:42.035 [2024-07-13 22:04:01.332170] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:42.294 22:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:42.294 22:04:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:42.294 22:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:42.294 22:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:42.294 22:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:42.294 22:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:42.294 22:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:42.294 22:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:42.294 22:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:42.294 22:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:42.294 22:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.294 22:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:42.294 22:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:42.294 "name": "Existed_Raid", 00:20:42.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.294 "strip_size_kb": 0, 00:20:42.294 "state": "configuring", 00:20:42.294 "raid_level": "raid1", 00:20:42.294 "superblock": false, 00:20:42.294 "num_base_bdevs": 4, 00:20:42.294 "num_base_bdevs_discovered": 2, 00:20:42.294 "num_base_bdevs_operational": 4, 00:20:42.294 "base_bdevs_list": [ 00:20:42.294 { 00:20:42.294 "name": null, 00:20:42.294 "uuid": "9124db0a-2362-4c17-981a-31a76158b91e", 00:20:42.294 "is_configured": false, 00:20:42.294 "data_offset": 0, 
00:20:42.294 "data_size": 65536 00:20:42.294 }, 00:20:42.294 { 00:20:42.294 "name": null, 00:20:42.294 "uuid": "833b12b8-25b7-4774-a838-85d0b52b47f5", 00:20:42.294 "is_configured": false, 00:20:42.294 "data_offset": 0, 00:20:42.294 "data_size": 65536 00:20:42.294 }, 00:20:42.294 { 00:20:42.294 "name": "BaseBdev3", 00:20:42.294 "uuid": "3c87150e-dc83-42a1-87d0-814ef6153e1d", 00:20:42.294 "is_configured": true, 00:20:42.294 "data_offset": 0, 00:20:42.294 "data_size": 65536 00:20:42.294 }, 00:20:42.294 { 00:20:42.294 "name": "BaseBdev4", 00:20:42.294 "uuid": "7afb3964-f3e4-4a77-80e5-7c7054128d0a", 00:20:42.294 "is_configured": true, 00:20:42.294 "data_offset": 0, 00:20:42.294 "data_size": 65536 00:20:42.294 } 00:20:42.294 ] 00:20:42.294 }' 00:20:42.294 22:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:42.294 22:04:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.861 22:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.862 22:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:42.862 22:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:42.862 22:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:43.121 [2024-07-13 22:04:02.333691] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:43.121 22:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:43.121 22:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:20:43.121 22:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:43.121 22:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:43.121 22:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:43.121 22:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:43.121 22:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:43.121 22:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:43.121 22:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:43.121 22:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:43.121 22:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:43.121 22:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.417 22:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:43.417 "name": "Existed_Raid", 00:20:43.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.417 "strip_size_kb": 0, 00:20:43.417 "state": "configuring", 00:20:43.417 "raid_level": "raid1", 00:20:43.417 "superblock": false, 00:20:43.417 "num_base_bdevs": 4, 00:20:43.417 "num_base_bdevs_discovered": 3, 00:20:43.417 "num_base_bdevs_operational": 4, 00:20:43.417 "base_bdevs_list": [ 00:20:43.417 { 00:20:43.417 "name": null, 00:20:43.417 "uuid": "9124db0a-2362-4c17-981a-31a76158b91e", 00:20:43.417 "is_configured": false, 00:20:43.417 "data_offset": 0, 00:20:43.417 "data_size": 65536 00:20:43.417 }, 00:20:43.417 { 
00:20:43.417 "name": "BaseBdev2", 00:20:43.417 "uuid": "833b12b8-25b7-4774-a838-85d0b52b47f5", 00:20:43.417 "is_configured": true, 00:20:43.417 "data_offset": 0, 00:20:43.417 "data_size": 65536 00:20:43.417 }, 00:20:43.417 { 00:20:43.417 "name": "BaseBdev3", 00:20:43.417 "uuid": "3c87150e-dc83-42a1-87d0-814ef6153e1d", 00:20:43.417 "is_configured": true, 00:20:43.417 "data_offset": 0, 00:20:43.417 "data_size": 65536 00:20:43.417 }, 00:20:43.417 { 00:20:43.417 "name": "BaseBdev4", 00:20:43.417 "uuid": "7afb3964-f3e4-4a77-80e5-7c7054128d0a", 00:20:43.417 "is_configured": true, 00:20:43.417 "data_offset": 0, 00:20:43.417 "data_size": 65536 00:20:43.417 } 00:20:43.417 ] 00:20:43.417 }' 00:20:43.417 22:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:43.417 22:04:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.688 22:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.688 22:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:43.947 22:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:43.947 22:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:43.947 22:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.205 22:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 9124db0a-2362-4c17-981a-31a76158b91e 00:20:44.205 [2024-07-13 22:04:03.532876] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:44.205 [2024-07-13 22:04:03.532924] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042080 00:20:44.205 [2024-07-13 22:04:03.532942] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:44.205 [2024-07-13 22:04:03.533194] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:20:44.205 [2024-07-13 22:04:03.533362] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042080 00:20:44.205 [2024-07-13 22:04:03.533372] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000042080 00:20:44.205 [2024-07-13 22:04:03.533609] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:44.205 NewBaseBdev 00:20:44.205 22:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:44.205 22:04:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:20:44.205 22:04:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:44.205 22:04:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:44.205 22:04:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:44.205 22:04:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:44.205 22:04:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:44.464 22:04:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 
00:20:44.464 [ 00:20:44.464 { 00:20:44.464 "name": "NewBaseBdev", 00:20:44.464 "aliases": [ 00:20:44.464 "9124db0a-2362-4c17-981a-31a76158b91e" 00:20:44.464 ], 00:20:44.464 "product_name": "Malloc disk", 00:20:44.464 "block_size": 512, 00:20:44.464 "num_blocks": 65536, 00:20:44.464 "uuid": "9124db0a-2362-4c17-981a-31a76158b91e", 00:20:44.464 "assigned_rate_limits": { 00:20:44.464 "rw_ios_per_sec": 0, 00:20:44.464 "rw_mbytes_per_sec": 0, 00:20:44.464 "r_mbytes_per_sec": 0, 00:20:44.464 "w_mbytes_per_sec": 0 00:20:44.464 }, 00:20:44.464 "claimed": true, 00:20:44.464 "claim_type": "exclusive_write", 00:20:44.464 "zoned": false, 00:20:44.464 "supported_io_types": { 00:20:44.464 "read": true, 00:20:44.464 "write": true, 00:20:44.464 "unmap": true, 00:20:44.464 "flush": true, 00:20:44.464 "reset": true, 00:20:44.464 "nvme_admin": false, 00:20:44.464 "nvme_io": false, 00:20:44.464 "nvme_io_md": false, 00:20:44.464 "write_zeroes": true, 00:20:44.464 "zcopy": true, 00:20:44.464 "get_zone_info": false, 00:20:44.464 "zone_management": false, 00:20:44.464 "zone_append": false, 00:20:44.464 "compare": false, 00:20:44.464 "compare_and_write": false, 00:20:44.464 "abort": true, 00:20:44.464 "seek_hole": false, 00:20:44.464 "seek_data": false, 00:20:44.464 "copy": true, 00:20:44.464 "nvme_iov_md": false 00:20:44.464 }, 00:20:44.464 "memory_domains": [ 00:20:44.464 { 00:20:44.464 "dma_device_id": "system", 00:20:44.464 "dma_device_type": 1 00:20:44.464 }, 00:20:44.464 { 00:20:44.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.464 "dma_device_type": 2 00:20:44.464 } 00:20:44.464 ], 00:20:44.464 "driver_specific": {} 00:20:44.464 } 00:20:44.464 ] 00:20:44.724 22:04:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:44.724 22:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:20:44.724 22:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:20:44.724 22:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:44.724 22:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:44.724 22:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:44.724 22:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:44.724 22:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:44.724 22:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:44.724 22:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:44.724 22:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:44.724 22:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:44.724 22:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.724 22:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:44.724 "name": "Existed_Raid", 00:20:44.724 "uuid": "801e8cf2-85ed-4c95-adc1-444ed9707ad8", 00:20:44.724 "strip_size_kb": 0, 00:20:44.724 "state": "online", 00:20:44.724 "raid_level": "raid1", 00:20:44.724 "superblock": false, 00:20:44.724 "num_base_bdevs": 4, 00:20:44.724 "num_base_bdevs_discovered": 4, 00:20:44.724 "num_base_bdevs_operational": 4, 00:20:44.724 "base_bdevs_list": [ 00:20:44.724 { 00:20:44.724 "name": "NewBaseBdev", 00:20:44.724 "uuid": "9124db0a-2362-4c17-981a-31a76158b91e", 00:20:44.724 "is_configured": true, 00:20:44.724 "data_offset": 0, 00:20:44.724 "data_size": 65536 00:20:44.724 }, 00:20:44.724 { 00:20:44.724 
"name": "BaseBdev2", 00:20:44.724 "uuid": "833b12b8-25b7-4774-a838-85d0b52b47f5", 00:20:44.724 "is_configured": true, 00:20:44.724 "data_offset": 0, 00:20:44.724 "data_size": 65536 00:20:44.724 }, 00:20:44.724 { 00:20:44.724 "name": "BaseBdev3", 00:20:44.724 "uuid": "3c87150e-dc83-42a1-87d0-814ef6153e1d", 00:20:44.724 "is_configured": true, 00:20:44.724 "data_offset": 0, 00:20:44.724 "data_size": 65536 00:20:44.724 }, 00:20:44.724 { 00:20:44.724 "name": "BaseBdev4", 00:20:44.724 "uuid": "7afb3964-f3e4-4a77-80e5-7c7054128d0a", 00:20:44.724 "is_configured": true, 00:20:44.724 "data_offset": 0, 00:20:44.724 "data_size": 65536 00:20:44.724 } 00:20:44.724 ] 00:20:44.724 }' 00:20:44.724 22:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:44.724 22:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.291 22:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:45.291 22:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:45.291 22:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:45.291 22:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:45.291 22:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:45.291 22:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:45.291 22:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:45.291 22:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:45.291 [2024-07-13 22:04:04.632131] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:45.291 22:04:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:45.291 "name": "Existed_Raid", 00:20:45.291 "aliases": [ 00:20:45.291 "801e8cf2-85ed-4c95-adc1-444ed9707ad8" 00:20:45.291 ], 00:20:45.291 "product_name": "Raid Volume", 00:20:45.291 "block_size": 512, 00:20:45.291 "num_blocks": 65536, 00:20:45.291 "uuid": "801e8cf2-85ed-4c95-adc1-444ed9707ad8", 00:20:45.291 "assigned_rate_limits": { 00:20:45.291 "rw_ios_per_sec": 0, 00:20:45.291 "rw_mbytes_per_sec": 0, 00:20:45.291 "r_mbytes_per_sec": 0, 00:20:45.291 "w_mbytes_per_sec": 0 00:20:45.291 }, 00:20:45.291 "claimed": false, 00:20:45.291 "zoned": false, 00:20:45.291 "supported_io_types": { 00:20:45.291 "read": true, 00:20:45.291 "write": true, 00:20:45.291 "unmap": false, 00:20:45.291 "flush": false, 00:20:45.291 "reset": true, 00:20:45.291 "nvme_admin": false, 00:20:45.291 "nvme_io": false, 00:20:45.292 "nvme_io_md": false, 00:20:45.292 "write_zeroes": true, 00:20:45.292 "zcopy": false, 00:20:45.292 "get_zone_info": false, 00:20:45.292 "zone_management": false, 00:20:45.292 "zone_append": false, 00:20:45.292 "compare": false, 00:20:45.292 "compare_and_write": false, 00:20:45.292 "abort": false, 00:20:45.292 "seek_hole": false, 00:20:45.292 "seek_data": false, 00:20:45.292 "copy": false, 00:20:45.292 "nvme_iov_md": false 00:20:45.292 }, 00:20:45.292 "memory_domains": [ 00:20:45.292 { 00:20:45.292 "dma_device_id": "system", 00:20:45.292 "dma_device_type": 1 00:20:45.292 }, 00:20:45.292 { 00:20:45.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.292 "dma_device_type": 2 00:20:45.292 }, 00:20:45.292 { 00:20:45.292 "dma_device_id": "system", 00:20:45.292 "dma_device_type": 1 00:20:45.292 }, 00:20:45.292 { 00:20:45.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.292 "dma_device_type": 2 00:20:45.292 }, 00:20:45.292 { 00:20:45.292 "dma_device_id": "system", 00:20:45.292 "dma_device_type": 1 00:20:45.292 }, 00:20:45.292 { 00:20:45.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:20:45.292 "dma_device_type": 2 00:20:45.292 }, 00:20:45.292 { 00:20:45.292 "dma_device_id": "system", 00:20:45.292 "dma_device_type": 1 00:20:45.292 }, 00:20:45.292 { 00:20:45.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.292 "dma_device_type": 2 00:20:45.292 } 00:20:45.292 ], 00:20:45.292 "driver_specific": { 00:20:45.292 "raid": { 00:20:45.292 "uuid": "801e8cf2-85ed-4c95-adc1-444ed9707ad8", 00:20:45.292 "strip_size_kb": 0, 00:20:45.292 "state": "online", 00:20:45.292 "raid_level": "raid1", 00:20:45.292 "superblock": false, 00:20:45.292 "num_base_bdevs": 4, 00:20:45.292 "num_base_bdevs_discovered": 4, 00:20:45.292 "num_base_bdevs_operational": 4, 00:20:45.292 "base_bdevs_list": [ 00:20:45.292 { 00:20:45.292 "name": "NewBaseBdev", 00:20:45.292 "uuid": "9124db0a-2362-4c17-981a-31a76158b91e", 00:20:45.292 "is_configured": true, 00:20:45.292 "data_offset": 0, 00:20:45.292 "data_size": 65536 00:20:45.292 }, 00:20:45.292 { 00:20:45.292 "name": "BaseBdev2", 00:20:45.292 "uuid": "833b12b8-25b7-4774-a838-85d0b52b47f5", 00:20:45.292 "is_configured": true, 00:20:45.292 "data_offset": 0, 00:20:45.292 "data_size": 65536 00:20:45.292 }, 00:20:45.292 { 00:20:45.292 "name": "BaseBdev3", 00:20:45.292 "uuid": "3c87150e-dc83-42a1-87d0-814ef6153e1d", 00:20:45.292 "is_configured": true, 00:20:45.292 "data_offset": 0, 00:20:45.292 "data_size": 65536 00:20:45.292 }, 00:20:45.292 { 00:20:45.292 "name": "BaseBdev4", 00:20:45.292 "uuid": "7afb3964-f3e4-4a77-80e5-7c7054128d0a", 00:20:45.292 "is_configured": true, 00:20:45.292 "data_offset": 0, 00:20:45.292 "data_size": 65536 00:20:45.292 } 00:20:45.292 ] 00:20:45.292 } 00:20:45.292 } 00:20:45.292 }' 00:20:45.292 22:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:45.550 22:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:45.550 BaseBdev2 00:20:45.550 BaseBdev3 
00:20:45.550 BaseBdev4' 00:20:45.550 22:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:45.550 22:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:45.550 22:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:45.550 22:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:45.550 "name": "NewBaseBdev", 00:20:45.550 "aliases": [ 00:20:45.550 "9124db0a-2362-4c17-981a-31a76158b91e" 00:20:45.550 ], 00:20:45.550 "product_name": "Malloc disk", 00:20:45.550 "block_size": 512, 00:20:45.550 "num_blocks": 65536, 00:20:45.550 "uuid": "9124db0a-2362-4c17-981a-31a76158b91e", 00:20:45.550 "assigned_rate_limits": { 00:20:45.550 "rw_ios_per_sec": 0, 00:20:45.550 "rw_mbytes_per_sec": 0, 00:20:45.550 "r_mbytes_per_sec": 0, 00:20:45.550 "w_mbytes_per_sec": 0 00:20:45.550 }, 00:20:45.550 "claimed": true, 00:20:45.550 "claim_type": "exclusive_write", 00:20:45.550 "zoned": false, 00:20:45.550 "supported_io_types": { 00:20:45.550 "read": true, 00:20:45.550 "write": true, 00:20:45.550 "unmap": true, 00:20:45.550 "flush": true, 00:20:45.550 "reset": true, 00:20:45.550 "nvme_admin": false, 00:20:45.550 "nvme_io": false, 00:20:45.550 "nvme_io_md": false, 00:20:45.550 "write_zeroes": true, 00:20:45.550 "zcopy": true, 00:20:45.550 "get_zone_info": false, 00:20:45.550 "zone_management": false, 00:20:45.550 "zone_append": false, 00:20:45.550 "compare": false, 00:20:45.550 "compare_and_write": false, 00:20:45.550 "abort": true, 00:20:45.550 "seek_hole": false, 00:20:45.550 "seek_data": false, 00:20:45.550 "copy": true, 00:20:45.550 "nvme_iov_md": false 00:20:45.550 }, 00:20:45.550 "memory_domains": [ 00:20:45.550 { 00:20:45.550 "dma_device_id": "system", 00:20:45.550 "dma_device_type": 1 00:20:45.550 }, 00:20:45.550 { 
00:20:45.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.550 "dma_device_type": 2 00:20:45.550 } 00:20:45.550 ], 00:20:45.550 "driver_specific": {} 00:20:45.550 }' 00:20:45.550 22:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:45.550 22:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:45.550 22:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:45.550 22:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:45.809 22:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:45.809 22:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:45.809 22:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:45.809 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:45.809 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:45.809 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:45.809 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:45.809 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:45.809 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:45.809 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:45.809 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:46.067 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:46.067 "name": "BaseBdev2", 00:20:46.067 "aliases": [ 00:20:46.067 
"833b12b8-25b7-4774-a838-85d0b52b47f5" 00:20:46.067 ], 00:20:46.067 "product_name": "Malloc disk", 00:20:46.067 "block_size": 512, 00:20:46.067 "num_blocks": 65536, 00:20:46.067 "uuid": "833b12b8-25b7-4774-a838-85d0b52b47f5", 00:20:46.067 "assigned_rate_limits": { 00:20:46.067 "rw_ios_per_sec": 0, 00:20:46.067 "rw_mbytes_per_sec": 0, 00:20:46.067 "r_mbytes_per_sec": 0, 00:20:46.067 "w_mbytes_per_sec": 0 00:20:46.067 }, 00:20:46.067 "claimed": true, 00:20:46.067 "claim_type": "exclusive_write", 00:20:46.067 "zoned": false, 00:20:46.067 "supported_io_types": { 00:20:46.067 "read": true, 00:20:46.067 "write": true, 00:20:46.067 "unmap": true, 00:20:46.067 "flush": true, 00:20:46.067 "reset": true, 00:20:46.067 "nvme_admin": false, 00:20:46.067 "nvme_io": false, 00:20:46.067 "nvme_io_md": false, 00:20:46.067 "write_zeroes": true, 00:20:46.067 "zcopy": true, 00:20:46.067 "get_zone_info": false, 00:20:46.067 "zone_management": false, 00:20:46.067 "zone_append": false, 00:20:46.067 "compare": false, 00:20:46.067 "compare_and_write": false, 00:20:46.067 "abort": true, 00:20:46.067 "seek_hole": false, 00:20:46.067 "seek_data": false, 00:20:46.067 "copy": true, 00:20:46.067 "nvme_iov_md": false 00:20:46.067 }, 00:20:46.067 "memory_domains": [ 00:20:46.067 { 00:20:46.067 "dma_device_id": "system", 00:20:46.067 "dma_device_type": 1 00:20:46.067 }, 00:20:46.067 { 00:20:46.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.067 "dma_device_type": 2 00:20:46.067 } 00:20:46.067 ], 00:20:46.067 "driver_specific": {} 00:20:46.067 }' 00:20:46.067 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:46.067 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:46.067 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:46.067 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:46.067 22:04:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:46.067 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:46.067 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:46.067 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:46.067 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:46.067 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:46.326 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:46.326 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:46.326 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:46.326 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:46.326 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:46.326 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:46.326 "name": "BaseBdev3", 00:20:46.326 "aliases": [ 00:20:46.326 "3c87150e-dc83-42a1-87d0-814ef6153e1d" 00:20:46.326 ], 00:20:46.326 "product_name": "Malloc disk", 00:20:46.326 "block_size": 512, 00:20:46.326 "num_blocks": 65536, 00:20:46.326 "uuid": "3c87150e-dc83-42a1-87d0-814ef6153e1d", 00:20:46.326 "assigned_rate_limits": { 00:20:46.326 "rw_ios_per_sec": 0, 00:20:46.326 "rw_mbytes_per_sec": 0, 00:20:46.326 "r_mbytes_per_sec": 0, 00:20:46.326 "w_mbytes_per_sec": 0 00:20:46.326 }, 00:20:46.326 "claimed": true, 00:20:46.326 "claim_type": "exclusive_write", 00:20:46.326 "zoned": false, 00:20:46.326 "supported_io_types": { 00:20:46.326 "read": true, 
00:20:46.326 "write": true, 00:20:46.326 "unmap": true, 00:20:46.326 "flush": true, 00:20:46.326 "reset": true, 00:20:46.326 "nvme_admin": false, 00:20:46.326 "nvme_io": false, 00:20:46.326 "nvme_io_md": false, 00:20:46.326 "write_zeroes": true, 00:20:46.326 "zcopy": true, 00:20:46.326 "get_zone_info": false, 00:20:46.326 "zone_management": false, 00:20:46.326 "zone_append": false, 00:20:46.326 "compare": false, 00:20:46.326 "compare_and_write": false, 00:20:46.326 "abort": true, 00:20:46.326 "seek_hole": false, 00:20:46.326 "seek_data": false, 00:20:46.326 "copy": true, 00:20:46.326 "nvme_iov_md": false 00:20:46.326 }, 00:20:46.326 "memory_domains": [ 00:20:46.326 { 00:20:46.326 "dma_device_id": "system", 00:20:46.326 "dma_device_type": 1 00:20:46.326 }, 00:20:46.326 { 00:20:46.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.326 "dma_device_type": 2 00:20:46.326 } 00:20:46.326 ], 00:20:46.326 "driver_specific": {} 00:20:46.326 }' 00:20:46.326 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:46.326 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:46.586 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:46.586 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:46.586 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:46.586 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:46.586 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:46.586 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:46.586 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:46.586 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:46.586 
22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:46.586 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:46.586 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:46.586 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:46.586 22:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:46.845 22:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:46.845 "name": "BaseBdev4", 00:20:46.845 "aliases": [ 00:20:46.845 "7afb3964-f3e4-4a77-80e5-7c7054128d0a" 00:20:46.845 ], 00:20:46.845 "product_name": "Malloc disk", 00:20:46.845 "block_size": 512, 00:20:46.845 "num_blocks": 65536, 00:20:46.845 "uuid": "7afb3964-f3e4-4a77-80e5-7c7054128d0a", 00:20:46.845 "assigned_rate_limits": { 00:20:46.845 "rw_ios_per_sec": 0, 00:20:46.845 "rw_mbytes_per_sec": 0, 00:20:46.845 "r_mbytes_per_sec": 0, 00:20:46.845 "w_mbytes_per_sec": 0 00:20:46.845 }, 00:20:46.845 "claimed": true, 00:20:46.845 "claim_type": "exclusive_write", 00:20:46.845 "zoned": false, 00:20:46.845 "supported_io_types": { 00:20:46.845 "read": true, 00:20:46.845 "write": true, 00:20:46.845 "unmap": true, 00:20:46.845 "flush": true, 00:20:46.845 "reset": true, 00:20:46.845 "nvme_admin": false, 00:20:46.845 "nvme_io": false, 00:20:46.845 "nvme_io_md": false, 00:20:46.845 "write_zeroes": true, 00:20:46.845 "zcopy": true, 00:20:46.845 "get_zone_info": false, 00:20:46.845 "zone_management": false, 00:20:46.845 "zone_append": false, 00:20:46.845 "compare": false, 00:20:46.845 "compare_and_write": false, 00:20:46.846 "abort": true, 00:20:46.846 "seek_hole": false, 00:20:46.846 "seek_data": false, 00:20:46.846 "copy": true, 00:20:46.846 "nvme_iov_md": false 
00:20:46.846 }, 00:20:46.846 "memory_domains": [ 00:20:46.846 { 00:20:46.846 "dma_device_id": "system", 00:20:46.846 "dma_device_type": 1 00:20:46.846 }, 00:20:46.846 { 00:20:46.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.846 "dma_device_type": 2 00:20:46.846 } 00:20:46.846 ], 00:20:46.846 "driver_specific": {} 00:20:46.846 }' 00:20:46.846 22:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:46.846 22:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:46.846 22:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:46.846 22:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:46.846 22:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:46.846 22:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:46.846 22:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:47.105 22:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:47.105 22:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:47.105 22:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:47.105 22:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:47.105 22:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:47.105 22:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:47.364 [2024-07-13 22:04:06.512916] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:47.364 [2024-07-13 22:04:06.512941] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:20:47.364 [2024-07-13 22:04:06.513016] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:47.364 [2024-07-13 22:04:06.513284] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:47.364 [2024-07-13 22:04:06.513300] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042080 name Existed_Raid, state offline 00:20:47.364 22:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1443717 00:20:47.365 22:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 1443717 ']' 00:20:47.365 22:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 1443717 00:20:47.365 22:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:20:47.365 22:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:47.365 22:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1443717 00:20:47.365 22:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:47.365 22:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:47.365 22:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1443717' 00:20:47.365 killing process with pid 1443717 00:20:47.365 22:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 1443717 00:20:47.365 [2024-07-13 22:04:06.574177] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:47.365 22:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 1443717 00:20:47.624 [2024-07-13 22:04:06.895870] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:49.004 
22:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:20:49.004 00:20:49.004 real 0m25.795s 00:20:49.004 user 0m45.316s 00:20:49.004 sys 0m4.702s 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.004 ************************************ 00:20:49.004 END TEST raid_state_function_test 00:20:49.004 ************************************ 00:20:49.004 22:04:08 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:49.004 22:04:08 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:20:49.004 22:04:08 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:49.004 22:04:08 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:49.004 22:04:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:49.004 ************************************ 00:20:49.004 START TEST raid_state_function_test_sb 00:20:49.004 ************************************ 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 true 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:49.004 22:04:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:49.004 
22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1448644 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1448644' 00:20:49.004 Process raid pid: 1448644 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1448644 /var/tmp/spdk-raid.sock 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1448644 ']' 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:49.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:49.004 22:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.004 [2024-07-13 22:04:08.297499] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:49.004 [2024-07-13 22:04:08.297586] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.004 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:49.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.004 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:49.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.004 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:49.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:49.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:49.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:49.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:49.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:49.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:49.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 
0000:3d:02.1 cannot be used 00:20:49.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:49.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 0000:3d:02.3 cannot be used 00:20:49.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:49.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:49.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:49.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:49.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:49.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:49.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:49.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:49.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:49.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:49.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:49.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 0000:3f:01.7 cannot be 
used 00:20:49.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:49.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:49.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:49.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:49.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:49.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:49.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:49.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.264 EAL: Requested device 0000:3f:02.7 cannot be used 00:20:49.264 [2024-07-13 22:04:08.460445] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.524 [2024-07-13 22:04:08.674435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.783 [2024-07-13 22:04:08.925929] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:49.783 [2024-07-13 22:04:08.925963] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:49.783 22:04:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:49.783 22:04:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:20:49.783 22:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 
-b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:50.043 [2024-07-13 22:04:09.212860] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:50.043 [2024-07-13 22:04:09.212919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:50.043 [2024-07-13 22:04:09.212930] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:50.043 [2024-07-13 22:04:09.212958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:50.043 [2024-07-13 22:04:09.212966] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:50.043 [2024-07-13 22:04:09.212977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:50.043 [2024-07-13 22:04:09.212985] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:50.043 [2024-07-13 22:04:09.212996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:50.043 22:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:50.043 22:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:50.043 22:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:50.043 22:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:50.043 22:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:50.043 22:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:50.043 22:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:50.043 22:04:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:50.043 22:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:50.043 22:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:50.043 22:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.043 22:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:50.043 22:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:50.043 "name": "Existed_Raid", 00:20:50.043 "uuid": "d4dd3cff-0722-43f4-b8e9-01f09c477cad", 00:20:50.043 "strip_size_kb": 0, 00:20:50.043 "state": "configuring", 00:20:50.043 "raid_level": "raid1", 00:20:50.043 "superblock": true, 00:20:50.043 "num_base_bdevs": 4, 00:20:50.043 "num_base_bdevs_discovered": 0, 00:20:50.043 "num_base_bdevs_operational": 4, 00:20:50.043 "base_bdevs_list": [ 00:20:50.043 { 00:20:50.043 "name": "BaseBdev1", 00:20:50.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.043 "is_configured": false, 00:20:50.043 "data_offset": 0, 00:20:50.043 "data_size": 0 00:20:50.043 }, 00:20:50.043 { 00:20:50.043 "name": "BaseBdev2", 00:20:50.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.043 "is_configured": false, 00:20:50.043 "data_offset": 0, 00:20:50.043 "data_size": 0 00:20:50.043 }, 00:20:50.043 { 00:20:50.043 "name": "BaseBdev3", 00:20:50.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.043 "is_configured": false, 00:20:50.043 "data_offset": 0, 00:20:50.043 "data_size": 0 00:20:50.043 }, 00:20:50.043 { 00:20:50.043 "name": "BaseBdev4", 00:20:50.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.043 "is_configured": false, 00:20:50.043 "data_offset": 0, 
00:20:50.043 "data_size": 0 00:20:50.043 } 00:20:50.043 ] 00:20:50.043 }' 00:20:50.043 22:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:50.043 22:04:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.610 22:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:50.610 [2024-07-13 22:04:09.994817] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:50.610 [2024-07-13 22:04:09.994854] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:20:50.870 22:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:50.870 [2024-07-13 22:04:10.163336] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:50.870 [2024-07-13 22:04:10.163385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:50.870 [2024-07-13 22:04:10.163395] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:50.870 [2024-07-13 22:04:10.163414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:50.870 [2024-07-13 22:04:10.163422] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:50.870 [2024-07-13 22:04:10.163433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:50.870 [2024-07-13 22:04:10.163441] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:50.870 [2024-07-13 22:04:10.163452] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:50.870 22:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:51.129 [2024-07-13 22:04:10.375370] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:51.129 BaseBdev1 00:20:51.129 22:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:51.129 22:04:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:51.129 22:04:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:51.129 22:04:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:51.129 22:04:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:51.129 22:04:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:51.129 22:04:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:51.388 22:04:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:51.388 [ 00:20:51.388 { 00:20:51.388 "name": "BaseBdev1", 00:20:51.388 "aliases": [ 00:20:51.388 "9e5d406f-540c-4be1-990f-64db11690521" 00:20:51.388 ], 00:20:51.388 "product_name": "Malloc disk", 00:20:51.388 "block_size": 512, 00:20:51.388 "num_blocks": 65536, 00:20:51.388 "uuid": "9e5d406f-540c-4be1-990f-64db11690521", 00:20:51.388 "assigned_rate_limits": { 00:20:51.388 "rw_ios_per_sec": 0, 00:20:51.388 
"rw_mbytes_per_sec": 0, 00:20:51.388 "r_mbytes_per_sec": 0, 00:20:51.388 "w_mbytes_per_sec": 0 00:20:51.388 }, 00:20:51.388 "claimed": true, 00:20:51.388 "claim_type": "exclusive_write", 00:20:51.388 "zoned": false, 00:20:51.388 "supported_io_types": { 00:20:51.388 "read": true, 00:20:51.388 "write": true, 00:20:51.388 "unmap": true, 00:20:51.388 "flush": true, 00:20:51.388 "reset": true, 00:20:51.388 "nvme_admin": false, 00:20:51.388 "nvme_io": false, 00:20:51.388 "nvme_io_md": false, 00:20:51.388 "write_zeroes": true, 00:20:51.388 "zcopy": true, 00:20:51.388 "get_zone_info": false, 00:20:51.388 "zone_management": false, 00:20:51.388 "zone_append": false, 00:20:51.388 "compare": false, 00:20:51.388 "compare_and_write": false, 00:20:51.388 "abort": true, 00:20:51.388 "seek_hole": false, 00:20:51.388 "seek_data": false, 00:20:51.388 "copy": true, 00:20:51.388 "nvme_iov_md": false 00:20:51.388 }, 00:20:51.388 "memory_domains": [ 00:20:51.388 { 00:20:51.388 "dma_device_id": "system", 00:20:51.388 "dma_device_type": 1 00:20:51.388 }, 00:20:51.388 { 00:20:51.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:51.388 "dma_device_type": 2 00:20:51.388 } 00:20:51.388 ], 00:20:51.388 "driver_specific": {} 00:20:51.388 } 00:20:51.388 ] 00:20:51.388 22:04:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:51.388 22:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:51.388 22:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:51.388 22:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:51.388 22:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:51.388 22:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:51.388 22:04:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:51.388 22:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:51.388 22:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:51.388 22:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:51.388 22:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:51.388 22:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.388 22:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:51.647 22:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:51.647 "name": "Existed_Raid", 00:20:51.647 "uuid": "cffb3a56-ffd3-48e1-94d1-b2f9d5c5c517", 00:20:51.647 "strip_size_kb": 0, 00:20:51.647 "state": "configuring", 00:20:51.647 "raid_level": "raid1", 00:20:51.647 "superblock": true, 00:20:51.647 "num_base_bdevs": 4, 00:20:51.647 "num_base_bdevs_discovered": 1, 00:20:51.647 "num_base_bdevs_operational": 4, 00:20:51.647 "base_bdevs_list": [ 00:20:51.647 { 00:20:51.647 "name": "BaseBdev1", 00:20:51.647 "uuid": "9e5d406f-540c-4be1-990f-64db11690521", 00:20:51.647 "is_configured": true, 00:20:51.647 "data_offset": 2048, 00:20:51.647 "data_size": 63488 00:20:51.647 }, 00:20:51.647 { 00:20:51.647 "name": "BaseBdev2", 00:20:51.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.647 "is_configured": false, 00:20:51.647 "data_offset": 0, 00:20:51.647 "data_size": 0 00:20:51.647 }, 00:20:51.647 { 00:20:51.647 "name": "BaseBdev3", 00:20:51.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.647 "is_configured": false, 
00:20:51.647 "data_offset": 0, 00:20:51.647 "data_size": 0 00:20:51.647 }, 00:20:51.647 { 00:20:51.647 "name": "BaseBdev4", 00:20:51.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.647 "is_configured": false, 00:20:51.647 "data_offset": 0, 00:20:51.647 "data_size": 0 00:20:51.647 } 00:20:51.647 ] 00:20:51.647 }' 00:20:51.647 22:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:51.647 22:04:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.219 22:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:52.219 [2024-07-13 22:04:11.486457] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:52.219 [2024-07-13 22:04:11.486511] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:20:52.219 22:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:52.478 [2024-07-13 22:04:11.646970] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:52.478 [2024-07-13 22:04:11.648718] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:52.478 [2024-07-13 22:04:11.648758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:52.478 [2024-07-13 22:04:11.648768] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:52.478 [2024-07-13 22:04:11.648796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:52.478 [2024-07-13 22:04:11.648805] 
bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:52.478 [2024-07-13 22:04:11.648819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:52.478 22:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:52.478 22:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:52.478 22:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:52.478 22:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:52.478 22:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:52.478 22:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:52.478 22:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:52.478 22:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:52.478 22:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:52.478 22:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:52.478 22:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:52.478 22:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:52.478 22:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.478 22:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:52.478 22:04:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:52.478 "name": "Existed_Raid", 00:20:52.478 "uuid": "8b6bf41d-b033-4584-820f-29da10a62b1a", 00:20:52.478 "strip_size_kb": 0, 00:20:52.478 "state": "configuring", 00:20:52.478 "raid_level": "raid1", 00:20:52.478 "superblock": true, 00:20:52.478 "num_base_bdevs": 4, 00:20:52.478 "num_base_bdevs_discovered": 1, 00:20:52.478 "num_base_bdevs_operational": 4, 00:20:52.478 "base_bdevs_list": [ 00:20:52.478 { 00:20:52.478 "name": "BaseBdev1", 00:20:52.478 "uuid": "9e5d406f-540c-4be1-990f-64db11690521", 00:20:52.478 "is_configured": true, 00:20:52.478 "data_offset": 2048, 00:20:52.478 "data_size": 63488 00:20:52.478 }, 00:20:52.478 { 00:20:52.478 "name": "BaseBdev2", 00:20:52.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.478 "is_configured": false, 00:20:52.478 "data_offset": 0, 00:20:52.478 "data_size": 0 00:20:52.478 }, 00:20:52.478 { 00:20:52.478 "name": "BaseBdev3", 00:20:52.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.478 "is_configured": false, 00:20:52.478 "data_offset": 0, 00:20:52.478 "data_size": 0 00:20:52.478 }, 00:20:52.478 { 00:20:52.478 "name": "BaseBdev4", 00:20:52.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.478 "is_configured": false, 00:20:52.478 "data_offset": 0, 00:20:52.478 "data_size": 0 00:20:52.478 } 00:20:52.478 ] 00:20:52.478 }' 00:20:52.478 22:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:52.478 22:04:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.047 22:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:53.306 [2024-07-13 22:04:12.463765] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:53.306 BaseBdev2 00:20:53.306 
22:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:53.306 22:04:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:53.306 22:04:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:53.306 22:04:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:53.306 22:04:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:53.306 22:04:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:53.306 22:04:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:53.306 22:04:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:53.565 [ 00:20:53.565 { 00:20:53.565 "name": "BaseBdev2", 00:20:53.565 "aliases": [ 00:20:53.565 "09a73d36-0f89-4319-8d03-0e7268359bc0" 00:20:53.565 ], 00:20:53.565 "product_name": "Malloc disk", 00:20:53.565 "block_size": 512, 00:20:53.565 "num_blocks": 65536, 00:20:53.565 "uuid": "09a73d36-0f89-4319-8d03-0e7268359bc0", 00:20:53.565 "assigned_rate_limits": { 00:20:53.565 "rw_ios_per_sec": 0, 00:20:53.565 "rw_mbytes_per_sec": 0, 00:20:53.565 "r_mbytes_per_sec": 0, 00:20:53.565 "w_mbytes_per_sec": 0 00:20:53.565 }, 00:20:53.565 "claimed": true, 00:20:53.565 "claim_type": "exclusive_write", 00:20:53.565 "zoned": false, 00:20:53.565 "supported_io_types": { 00:20:53.565 "read": true, 00:20:53.565 "write": true, 00:20:53.565 "unmap": true, 00:20:53.565 "flush": true, 00:20:53.565 "reset": true, 00:20:53.565 "nvme_admin": false, 00:20:53.565 "nvme_io": false, 00:20:53.565 
"nvme_io_md": false, 00:20:53.565 "write_zeroes": true, 00:20:53.565 "zcopy": true, 00:20:53.565 "get_zone_info": false, 00:20:53.565 "zone_management": false, 00:20:53.565 "zone_append": false, 00:20:53.565 "compare": false, 00:20:53.565 "compare_and_write": false, 00:20:53.565 "abort": true, 00:20:53.565 "seek_hole": false, 00:20:53.565 "seek_data": false, 00:20:53.565 "copy": true, 00:20:53.565 "nvme_iov_md": false 00:20:53.565 }, 00:20:53.565 "memory_domains": [ 00:20:53.565 { 00:20:53.565 "dma_device_id": "system", 00:20:53.565 "dma_device_type": 1 00:20:53.565 }, 00:20:53.565 { 00:20:53.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:53.565 "dma_device_type": 2 00:20:53.565 } 00:20:53.565 ], 00:20:53.565 "driver_specific": {} 00:20:53.565 } 00:20:53.565 ] 00:20:53.565 22:04:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:53.565 22:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:53.565 22:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:53.565 22:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:53.565 22:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:53.565 22:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:53.565 22:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:53.565 22:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:53.565 22:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:53.565 22:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:53.565 22:04:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:53.565 22:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:53.565 22:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:53.565 22:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.566 22:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:53.824 22:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:53.824 "name": "Existed_Raid", 00:20:53.824 "uuid": "8b6bf41d-b033-4584-820f-29da10a62b1a", 00:20:53.824 "strip_size_kb": 0, 00:20:53.824 "state": "configuring", 00:20:53.824 "raid_level": "raid1", 00:20:53.824 "superblock": true, 00:20:53.824 "num_base_bdevs": 4, 00:20:53.824 "num_base_bdevs_discovered": 2, 00:20:53.824 "num_base_bdevs_operational": 4, 00:20:53.824 "base_bdevs_list": [ 00:20:53.824 { 00:20:53.824 "name": "BaseBdev1", 00:20:53.824 "uuid": "9e5d406f-540c-4be1-990f-64db11690521", 00:20:53.824 "is_configured": true, 00:20:53.824 "data_offset": 2048, 00:20:53.824 "data_size": 63488 00:20:53.824 }, 00:20:53.825 { 00:20:53.825 "name": "BaseBdev2", 00:20:53.825 "uuid": "09a73d36-0f89-4319-8d03-0e7268359bc0", 00:20:53.825 "is_configured": true, 00:20:53.825 "data_offset": 2048, 00:20:53.825 "data_size": 63488 00:20:53.825 }, 00:20:53.825 { 00:20:53.825 "name": "BaseBdev3", 00:20:53.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.825 "is_configured": false, 00:20:53.825 "data_offset": 0, 00:20:53.825 "data_size": 0 00:20:53.825 }, 00:20:53.825 { 00:20:53.825 "name": "BaseBdev4", 00:20:53.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.825 "is_configured": false, 00:20:53.825 
"data_offset": 0, 00:20:53.825 "data_size": 0 00:20:53.825 } 00:20:53.825 ] 00:20:53.825 }' 00:20:53.825 22:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:53.825 22:04:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.082 22:04:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:54.341 [2024-07-13 22:04:13.649583] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:54.341 BaseBdev3 00:20:54.341 22:04:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:54.341 22:04:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:54.341 22:04:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:54.341 22:04:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:54.341 22:04:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:54.341 22:04:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:54.341 22:04:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:54.599 22:04:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:54.859 [ 00:20:54.859 { 00:20:54.859 "name": "BaseBdev3", 00:20:54.859 "aliases": [ 00:20:54.859 "1eeaddc6-29a7-4974-8c80-7049e9d9a225" 00:20:54.859 ], 00:20:54.859 "product_name": "Malloc disk", 
00:20:54.859 "block_size": 512, 00:20:54.859 "num_blocks": 65536, 00:20:54.859 "uuid": "1eeaddc6-29a7-4974-8c80-7049e9d9a225", 00:20:54.859 "assigned_rate_limits": { 00:20:54.859 "rw_ios_per_sec": 0, 00:20:54.859 "rw_mbytes_per_sec": 0, 00:20:54.859 "r_mbytes_per_sec": 0, 00:20:54.859 "w_mbytes_per_sec": 0 00:20:54.859 }, 00:20:54.859 "claimed": true, 00:20:54.859 "claim_type": "exclusive_write", 00:20:54.859 "zoned": false, 00:20:54.859 "supported_io_types": { 00:20:54.859 "read": true, 00:20:54.859 "write": true, 00:20:54.859 "unmap": true, 00:20:54.859 "flush": true, 00:20:54.859 "reset": true, 00:20:54.859 "nvme_admin": false, 00:20:54.859 "nvme_io": false, 00:20:54.859 "nvme_io_md": false, 00:20:54.859 "write_zeroes": true, 00:20:54.859 "zcopy": true, 00:20:54.859 "get_zone_info": false, 00:20:54.859 "zone_management": false, 00:20:54.859 "zone_append": false, 00:20:54.859 "compare": false, 00:20:54.859 "compare_and_write": false, 00:20:54.859 "abort": true, 00:20:54.859 "seek_hole": false, 00:20:54.859 "seek_data": false, 00:20:54.859 "copy": true, 00:20:54.859 "nvme_iov_md": false 00:20:54.859 }, 00:20:54.859 "memory_domains": [ 00:20:54.859 { 00:20:54.859 "dma_device_id": "system", 00:20:54.859 "dma_device_type": 1 00:20:54.859 }, 00:20:54.859 { 00:20:54.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.859 "dma_device_type": 2 00:20:54.859 } 00:20:54.859 ], 00:20:54.859 "driver_specific": {} 00:20:54.859 } 00:20:54.859 ] 00:20:54.859 22:04:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:54.859 22:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:54.859 22:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:54.859 22:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:54.859 22:04:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:54.859 22:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:54.859 22:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:54.859 22:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:54.859 22:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:54.859 22:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:54.859 22:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:54.859 22:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:54.859 22:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:54.859 22:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.859 22:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:54.859 22:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:54.859 "name": "Existed_Raid", 00:20:54.859 "uuid": "8b6bf41d-b033-4584-820f-29da10a62b1a", 00:20:54.859 "strip_size_kb": 0, 00:20:54.859 "state": "configuring", 00:20:54.859 "raid_level": "raid1", 00:20:54.859 "superblock": true, 00:20:54.859 "num_base_bdevs": 4, 00:20:54.859 "num_base_bdevs_discovered": 3, 00:20:54.859 "num_base_bdevs_operational": 4, 00:20:54.859 "base_bdevs_list": [ 00:20:54.859 { 00:20:54.859 "name": "BaseBdev1", 00:20:54.859 "uuid": "9e5d406f-540c-4be1-990f-64db11690521", 00:20:54.859 "is_configured": true, 00:20:54.859 "data_offset": 2048, 
00:20:54.859 "data_size": 63488 00:20:54.859 }, 00:20:54.859 { 00:20:54.859 "name": "BaseBdev2", 00:20:54.859 "uuid": "09a73d36-0f89-4319-8d03-0e7268359bc0", 00:20:54.859 "is_configured": true, 00:20:54.859 "data_offset": 2048, 00:20:54.859 "data_size": 63488 00:20:54.859 }, 00:20:54.859 { 00:20:54.859 "name": "BaseBdev3", 00:20:54.859 "uuid": "1eeaddc6-29a7-4974-8c80-7049e9d9a225", 00:20:54.859 "is_configured": true, 00:20:54.859 "data_offset": 2048, 00:20:54.859 "data_size": 63488 00:20:54.859 }, 00:20:54.859 { 00:20:54.859 "name": "BaseBdev4", 00:20:54.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.859 "is_configured": false, 00:20:54.859 "data_offset": 0, 00:20:54.859 "data_size": 0 00:20:54.859 } 00:20:54.859 ] 00:20:54.859 }' 00:20:54.859 22:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:54.859 22:04:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.426 22:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:55.685 [2024-07-13 22:04:14.838072] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:55.685 [2024-07-13 22:04:14.838330] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:20:55.685 [2024-07-13 22:04:14.838354] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:55.685 [2024-07-13 22:04:14.838615] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:20:55.685 [2024-07-13 22:04:14.838816] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:20:55.685 [2024-07-13 22:04:14.838830] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:20:55.685 
[2024-07-13 22:04:14.838983] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:55.685 BaseBdev4 00:20:55.685 22:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:20:55.685 22:04:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:20:55.685 22:04:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:55.685 22:04:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:55.685 22:04:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:55.685 22:04:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:55.685 22:04:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:55.685 22:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:55.944 [ 00:20:55.944 { 00:20:55.944 "name": "BaseBdev4", 00:20:55.944 "aliases": [ 00:20:55.944 "020cc785-e210-40ef-9194-9ec9f14da7c9" 00:20:55.944 ], 00:20:55.944 "product_name": "Malloc disk", 00:20:55.944 "block_size": 512, 00:20:55.944 "num_blocks": 65536, 00:20:55.944 "uuid": "020cc785-e210-40ef-9194-9ec9f14da7c9", 00:20:55.944 "assigned_rate_limits": { 00:20:55.944 "rw_ios_per_sec": 0, 00:20:55.944 "rw_mbytes_per_sec": 0, 00:20:55.944 "r_mbytes_per_sec": 0, 00:20:55.944 "w_mbytes_per_sec": 0 00:20:55.944 }, 00:20:55.944 "claimed": true, 00:20:55.944 "claim_type": "exclusive_write", 00:20:55.944 "zoned": false, 00:20:55.944 "supported_io_types": { 00:20:55.944 "read": true, 00:20:55.944 "write": true, 00:20:55.944 "unmap": true, 
00:20:55.944 "flush": true, 00:20:55.944 "reset": true, 00:20:55.944 "nvme_admin": false, 00:20:55.944 "nvme_io": false, 00:20:55.944 "nvme_io_md": false, 00:20:55.944 "write_zeroes": true, 00:20:55.944 "zcopy": true, 00:20:55.944 "get_zone_info": false, 00:20:55.944 "zone_management": false, 00:20:55.944 "zone_append": false, 00:20:55.944 "compare": false, 00:20:55.944 "compare_and_write": false, 00:20:55.944 "abort": true, 00:20:55.944 "seek_hole": false, 00:20:55.944 "seek_data": false, 00:20:55.944 "copy": true, 00:20:55.944 "nvme_iov_md": false 00:20:55.944 }, 00:20:55.944 "memory_domains": [ 00:20:55.944 { 00:20:55.944 "dma_device_id": "system", 00:20:55.944 "dma_device_type": 1 00:20:55.944 }, 00:20:55.944 { 00:20:55.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.944 "dma_device_type": 2 00:20:55.944 } 00:20:55.944 ], 00:20:55.944 "driver_specific": {} 00:20:55.944 } 00:20:55.944 ] 00:20:55.944 22:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:55.944 22:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:55.944 22:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:55.944 22:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:20:55.944 22:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:55.944 22:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:55.944 22:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:55.944 22:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:55.944 22:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:55.944 22:04:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:55.944 22:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:55.944 22:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:55.944 22:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:55.944 22:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:55.944 22:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.203 22:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:56.203 "name": "Existed_Raid", 00:20:56.203 "uuid": "8b6bf41d-b033-4584-820f-29da10a62b1a", 00:20:56.203 "strip_size_kb": 0, 00:20:56.203 "state": "online", 00:20:56.203 "raid_level": "raid1", 00:20:56.203 "superblock": true, 00:20:56.203 "num_base_bdevs": 4, 00:20:56.203 "num_base_bdevs_discovered": 4, 00:20:56.203 "num_base_bdevs_operational": 4, 00:20:56.203 "base_bdevs_list": [ 00:20:56.203 { 00:20:56.203 "name": "BaseBdev1", 00:20:56.203 "uuid": "9e5d406f-540c-4be1-990f-64db11690521", 00:20:56.203 "is_configured": true, 00:20:56.203 "data_offset": 2048, 00:20:56.203 "data_size": 63488 00:20:56.203 }, 00:20:56.203 { 00:20:56.203 "name": "BaseBdev2", 00:20:56.203 "uuid": "09a73d36-0f89-4319-8d03-0e7268359bc0", 00:20:56.203 "is_configured": true, 00:20:56.203 "data_offset": 2048, 00:20:56.203 "data_size": 63488 00:20:56.203 }, 00:20:56.203 { 00:20:56.203 "name": "BaseBdev3", 00:20:56.203 "uuid": "1eeaddc6-29a7-4974-8c80-7049e9d9a225", 00:20:56.203 "is_configured": true, 00:20:56.203 "data_offset": 2048, 00:20:56.203 "data_size": 63488 00:20:56.203 }, 00:20:56.203 { 00:20:56.203 "name": "BaseBdev4", 
00:20:56.203 "uuid": "020cc785-e210-40ef-9194-9ec9f14da7c9", 00:20:56.203 "is_configured": true, 00:20:56.203 "data_offset": 2048, 00:20:56.203 "data_size": 63488 00:20:56.203 } 00:20:56.203 ] 00:20:56.203 }' 00:20:56.203 22:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:56.203 22:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.811 22:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:56.811 22:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:56.811 22:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:56.811 22:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:56.811 22:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:56.811 22:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:56.811 22:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:56.811 22:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:56.811 [2024-07-13 22:04:16.017564] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:56.811 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:56.811 "name": "Existed_Raid", 00:20:56.811 "aliases": [ 00:20:56.811 "8b6bf41d-b033-4584-820f-29da10a62b1a" 00:20:56.811 ], 00:20:56.811 "product_name": "Raid Volume", 00:20:56.811 "block_size": 512, 00:20:56.811 "num_blocks": 63488, 00:20:56.811 "uuid": "8b6bf41d-b033-4584-820f-29da10a62b1a", 00:20:56.811 "assigned_rate_limits": { 
00:20:56.811 "rw_ios_per_sec": 0, 00:20:56.811 "rw_mbytes_per_sec": 0, 00:20:56.811 "r_mbytes_per_sec": 0, 00:20:56.811 "w_mbytes_per_sec": 0 00:20:56.811 }, 00:20:56.811 "claimed": false, 00:20:56.811 "zoned": false, 00:20:56.811 "supported_io_types": { 00:20:56.811 "read": true, 00:20:56.811 "write": true, 00:20:56.811 "unmap": false, 00:20:56.811 "flush": false, 00:20:56.811 "reset": true, 00:20:56.811 "nvme_admin": false, 00:20:56.811 "nvme_io": false, 00:20:56.811 "nvme_io_md": false, 00:20:56.811 "write_zeroes": true, 00:20:56.811 "zcopy": false, 00:20:56.811 "get_zone_info": false, 00:20:56.811 "zone_management": false, 00:20:56.811 "zone_append": false, 00:20:56.811 "compare": false, 00:20:56.811 "compare_and_write": false, 00:20:56.811 "abort": false, 00:20:56.811 "seek_hole": false, 00:20:56.811 "seek_data": false, 00:20:56.811 "copy": false, 00:20:56.811 "nvme_iov_md": false 00:20:56.811 }, 00:20:56.811 "memory_domains": [ 00:20:56.811 { 00:20:56.811 "dma_device_id": "system", 00:20:56.811 "dma_device_type": 1 00:20:56.811 }, 00:20:56.811 { 00:20:56.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.811 "dma_device_type": 2 00:20:56.811 }, 00:20:56.811 { 00:20:56.811 "dma_device_id": "system", 00:20:56.811 "dma_device_type": 1 00:20:56.811 }, 00:20:56.811 { 00:20:56.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.811 "dma_device_type": 2 00:20:56.811 }, 00:20:56.811 { 00:20:56.811 "dma_device_id": "system", 00:20:56.811 "dma_device_type": 1 00:20:56.811 }, 00:20:56.811 { 00:20:56.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.811 "dma_device_type": 2 00:20:56.811 }, 00:20:56.811 { 00:20:56.811 "dma_device_id": "system", 00:20:56.811 "dma_device_type": 1 00:20:56.811 }, 00:20:56.811 { 00:20:56.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.811 "dma_device_type": 2 00:20:56.811 } 00:20:56.811 ], 00:20:56.811 "driver_specific": { 00:20:56.811 "raid": { 00:20:56.811 "uuid": "8b6bf41d-b033-4584-820f-29da10a62b1a", 00:20:56.811 
"strip_size_kb": 0, 00:20:56.811 "state": "online", 00:20:56.811 "raid_level": "raid1", 00:20:56.811 "superblock": true, 00:20:56.811 "num_base_bdevs": 4, 00:20:56.811 "num_base_bdevs_discovered": 4, 00:20:56.811 "num_base_bdevs_operational": 4, 00:20:56.811 "base_bdevs_list": [ 00:20:56.811 { 00:20:56.811 "name": "BaseBdev1", 00:20:56.811 "uuid": "9e5d406f-540c-4be1-990f-64db11690521", 00:20:56.811 "is_configured": true, 00:20:56.811 "data_offset": 2048, 00:20:56.811 "data_size": 63488 00:20:56.811 }, 00:20:56.811 { 00:20:56.811 "name": "BaseBdev2", 00:20:56.811 "uuid": "09a73d36-0f89-4319-8d03-0e7268359bc0", 00:20:56.811 "is_configured": true, 00:20:56.811 "data_offset": 2048, 00:20:56.811 "data_size": 63488 00:20:56.811 }, 00:20:56.811 { 00:20:56.811 "name": "BaseBdev3", 00:20:56.811 "uuid": "1eeaddc6-29a7-4974-8c80-7049e9d9a225", 00:20:56.811 "is_configured": true, 00:20:56.811 "data_offset": 2048, 00:20:56.811 "data_size": 63488 00:20:56.811 }, 00:20:56.811 { 00:20:56.811 "name": "BaseBdev4", 00:20:56.811 "uuid": "020cc785-e210-40ef-9194-9ec9f14da7c9", 00:20:56.811 "is_configured": true, 00:20:56.811 "data_offset": 2048, 00:20:56.811 "data_size": 63488 00:20:56.811 } 00:20:56.811 ] 00:20:56.811 } 00:20:56.811 } 00:20:56.811 }' 00:20:56.811 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:56.811 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:56.811 BaseBdev2 00:20:56.811 BaseBdev3 00:20:56.811 BaseBdev4' 00:20:56.811 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:56.811 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:56.811 22:04:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:57.072 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:57.072 "name": "BaseBdev1", 00:20:57.072 "aliases": [ 00:20:57.072 "9e5d406f-540c-4be1-990f-64db11690521" 00:20:57.072 ], 00:20:57.072 "product_name": "Malloc disk", 00:20:57.072 "block_size": 512, 00:20:57.072 "num_blocks": 65536, 00:20:57.072 "uuid": "9e5d406f-540c-4be1-990f-64db11690521", 00:20:57.072 "assigned_rate_limits": { 00:20:57.072 "rw_ios_per_sec": 0, 00:20:57.072 "rw_mbytes_per_sec": 0, 00:20:57.072 "r_mbytes_per_sec": 0, 00:20:57.072 "w_mbytes_per_sec": 0 00:20:57.072 }, 00:20:57.072 "claimed": true, 00:20:57.072 "claim_type": "exclusive_write", 00:20:57.072 "zoned": false, 00:20:57.073 "supported_io_types": { 00:20:57.073 "read": true, 00:20:57.073 "write": true, 00:20:57.073 "unmap": true, 00:20:57.073 "flush": true, 00:20:57.073 "reset": true, 00:20:57.073 "nvme_admin": false, 00:20:57.073 "nvme_io": false, 00:20:57.073 "nvme_io_md": false, 00:20:57.073 "write_zeroes": true, 00:20:57.073 "zcopy": true, 00:20:57.073 "get_zone_info": false, 00:20:57.073 "zone_management": false, 00:20:57.073 "zone_append": false, 00:20:57.073 "compare": false, 00:20:57.073 "compare_and_write": false, 00:20:57.073 "abort": true, 00:20:57.073 "seek_hole": false, 00:20:57.073 "seek_data": false, 00:20:57.073 "copy": true, 00:20:57.073 "nvme_iov_md": false 00:20:57.073 }, 00:20:57.073 "memory_domains": [ 00:20:57.073 { 00:20:57.073 "dma_device_id": "system", 00:20:57.073 "dma_device_type": 1 00:20:57.073 }, 00:20:57.073 { 00:20:57.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.073 "dma_device_type": 2 00:20:57.073 } 00:20:57.073 ], 00:20:57.073 "driver_specific": {} 00:20:57.073 }' 00:20:57.073 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:57.073 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:20:57.073 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:57.073 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:57.073 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:57.073 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:57.073 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:57.073 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:57.332 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:57.332 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:57.332 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:57.332 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:57.332 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:57.332 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:57.332 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:57.332 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:57.332 "name": "BaseBdev2", 00:20:57.332 "aliases": [ 00:20:57.332 "09a73d36-0f89-4319-8d03-0e7268359bc0" 00:20:57.332 ], 00:20:57.332 "product_name": "Malloc disk", 00:20:57.332 "block_size": 512, 00:20:57.332 "num_blocks": 65536, 00:20:57.332 "uuid": "09a73d36-0f89-4319-8d03-0e7268359bc0", 00:20:57.332 "assigned_rate_limits": { 00:20:57.332 "rw_ios_per_sec": 0, 00:20:57.332 
"rw_mbytes_per_sec": 0, 00:20:57.332 "r_mbytes_per_sec": 0, 00:20:57.332 "w_mbytes_per_sec": 0 00:20:57.332 }, 00:20:57.332 "claimed": true, 00:20:57.332 "claim_type": "exclusive_write", 00:20:57.332 "zoned": false, 00:20:57.332 "supported_io_types": { 00:20:57.332 "read": true, 00:20:57.332 "write": true, 00:20:57.332 "unmap": true, 00:20:57.332 "flush": true, 00:20:57.332 "reset": true, 00:20:57.332 "nvme_admin": false, 00:20:57.332 "nvme_io": false, 00:20:57.332 "nvme_io_md": false, 00:20:57.332 "write_zeroes": true, 00:20:57.332 "zcopy": true, 00:20:57.332 "get_zone_info": false, 00:20:57.332 "zone_management": false, 00:20:57.332 "zone_append": false, 00:20:57.332 "compare": false, 00:20:57.332 "compare_and_write": false, 00:20:57.332 "abort": true, 00:20:57.332 "seek_hole": false, 00:20:57.332 "seek_data": false, 00:20:57.332 "copy": true, 00:20:57.332 "nvme_iov_md": false 00:20:57.332 }, 00:20:57.332 "memory_domains": [ 00:20:57.332 { 00:20:57.332 "dma_device_id": "system", 00:20:57.332 "dma_device_type": 1 00:20:57.332 }, 00:20:57.332 { 00:20:57.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.332 "dma_device_type": 2 00:20:57.332 } 00:20:57.332 ], 00:20:57.332 "driver_specific": {} 00:20:57.332 }' 00:20:57.332 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:57.591 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:57.591 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:57.591 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:57.591 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:57.591 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:57.591 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:57.591 22:04:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:57.591 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:57.591 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:57.591 22:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:57.851 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:57.851 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:57.851 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:57.851 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:57.851 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:57.851 "name": "BaseBdev3", 00:20:57.851 "aliases": [ 00:20:57.851 "1eeaddc6-29a7-4974-8c80-7049e9d9a225" 00:20:57.851 ], 00:20:57.851 "product_name": "Malloc disk", 00:20:57.851 "block_size": 512, 00:20:57.851 "num_blocks": 65536, 00:20:57.851 "uuid": "1eeaddc6-29a7-4974-8c80-7049e9d9a225", 00:20:57.851 "assigned_rate_limits": { 00:20:57.851 "rw_ios_per_sec": 0, 00:20:57.851 "rw_mbytes_per_sec": 0, 00:20:57.851 "r_mbytes_per_sec": 0, 00:20:57.851 "w_mbytes_per_sec": 0 00:20:57.851 }, 00:20:57.851 "claimed": true, 00:20:57.851 "claim_type": "exclusive_write", 00:20:57.851 "zoned": false, 00:20:57.851 "supported_io_types": { 00:20:57.851 "read": true, 00:20:57.851 "write": true, 00:20:57.851 "unmap": true, 00:20:57.851 "flush": true, 00:20:57.851 "reset": true, 00:20:57.851 "nvme_admin": false, 00:20:57.851 "nvme_io": false, 00:20:57.851 "nvme_io_md": false, 00:20:57.851 "write_zeroes": true, 00:20:57.851 "zcopy": true, 00:20:57.851 
"get_zone_info": false, 00:20:57.851 "zone_management": false, 00:20:57.851 "zone_append": false, 00:20:57.851 "compare": false, 00:20:57.851 "compare_and_write": false, 00:20:57.851 "abort": true, 00:20:57.851 "seek_hole": false, 00:20:57.851 "seek_data": false, 00:20:57.851 "copy": true, 00:20:57.851 "nvme_iov_md": false 00:20:57.851 }, 00:20:57.851 "memory_domains": [ 00:20:57.851 { 00:20:57.851 "dma_device_id": "system", 00:20:57.851 "dma_device_type": 1 00:20:57.851 }, 00:20:57.851 { 00:20:57.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.851 "dma_device_type": 2 00:20:57.851 } 00:20:57.851 ], 00:20:57.851 "driver_specific": {} 00:20:57.851 }' 00:20:57.851 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:57.851 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:58.110 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:58.110 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:58.110 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:58.110 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:58.110 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:58.110 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:58.110 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:58.110 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:58.110 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:58.370 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:58.370 22:04:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:58.370 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:58.370 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:58.370 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:58.370 "name": "BaseBdev4", 00:20:58.370 "aliases": [ 00:20:58.370 "020cc785-e210-40ef-9194-9ec9f14da7c9" 00:20:58.370 ], 00:20:58.370 "product_name": "Malloc disk", 00:20:58.370 "block_size": 512, 00:20:58.370 "num_blocks": 65536, 00:20:58.370 "uuid": "020cc785-e210-40ef-9194-9ec9f14da7c9", 00:20:58.370 "assigned_rate_limits": { 00:20:58.370 "rw_ios_per_sec": 0, 00:20:58.370 "rw_mbytes_per_sec": 0, 00:20:58.370 "r_mbytes_per_sec": 0, 00:20:58.370 "w_mbytes_per_sec": 0 00:20:58.370 }, 00:20:58.370 "claimed": true, 00:20:58.370 "claim_type": "exclusive_write", 00:20:58.370 "zoned": false, 00:20:58.370 "supported_io_types": { 00:20:58.370 "read": true, 00:20:58.370 "write": true, 00:20:58.370 "unmap": true, 00:20:58.370 "flush": true, 00:20:58.370 "reset": true, 00:20:58.370 "nvme_admin": false, 00:20:58.370 "nvme_io": false, 00:20:58.370 "nvme_io_md": false, 00:20:58.370 "write_zeroes": true, 00:20:58.370 "zcopy": true, 00:20:58.370 "get_zone_info": false, 00:20:58.370 "zone_management": false, 00:20:58.370 "zone_append": false, 00:20:58.370 "compare": false, 00:20:58.370 "compare_and_write": false, 00:20:58.370 "abort": true, 00:20:58.370 "seek_hole": false, 00:20:58.370 "seek_data": false, 00:20:58.370 "copy": true, 00:20:58.370 "nvme_iov_md": false 00:20:58.370 }, 00:20:58.370 "memory_domains": [ 00:20:58.370 { 00:20:58.370 "dma_device_id": "system", 00:20:58.370 "dma_device_type": 1 00:20:58.370 }, 00:20:58.370 { 00:20:58.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.370 
"dma_device_type": 2 00:20:58.370 } 00:20:58.370 ], 00:20:58.370 "driver_specific": {} 00:20:58.370 }' 00:20:58.370 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:58.370 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:58.370 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:58.629 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:58.629 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:58.629 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:58.629 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:58.629 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:58.629 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:58.629 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:58.629 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:58.629 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:58.629 22:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:58.889 [2024-07-13 22:04:18.139085] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:58.889 22:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:58.889 22:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:20:58.889 22:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- 
# case $1 in 00:20:58.889 22:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:20:58.889 22:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:20:58.889 22:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:58.889 22:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:58.889 22:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:58.889 22:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:58.889 22:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:58.889 22:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:58.889 22:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:58.889 22:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:58.889 22:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:58.889 22:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:58.889 22:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.889 22:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:59.147 22:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:59.147 "name": "Existed_Raid", 00:20:59.147 "uuid": "8b6bf41d-b033-4584-820f-29da10a62b1a", 00:20:59.147 "strip_size_kb": 0, 00:20:59.147 
"state": "online", 00:20:59.147 "raid_level": "raid1", 00:20:59.147 "superblock": true, 00:20:59.147 "num_base_bdevs": 4, 00:20:59.147 "num_base_bdevs_discovered": 3, 00:20:59.147 "num_base_bdevs_operational": 3, 00:20:59.147 "base_bdevs_list": [ 00:20:59.147 { 00:20:59.147 "name": null, 00:20:59.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.147 "is_configured": false, 00:20:59.147 "data_offset": 2048, 00:20:59.147 "data_size": 63488 00:20:59.147 }, 00:20:59.147 { 00:20:59.147 "name": "BaseBdev2", 00:20:59.147 "uuid": "09a73d36-0f89-4319-8d03-0e7268359bc0", 00:20:59.147 "is_configured": true, 00:20:59.147 "data_offset": 2048, 00:20:59.147 "data_size": 63488 00:20:59.147 }, 00:20:59.147 { 00:20:59.147 "name": "BaseBdev3", 00:20:59.147 "uuid": "1eeaddc6-29a7-4974-8c80-7049e9d9a225", 00:20:59.147 "is_configured": true, 00:20:59.147 "data_offset": 2048, 00:20:59.148 "data_size": 63488 00:20:59.148 }, 00:20:59.148 { 00:20:59.148 "name": "BaseBdev4", 00:20:59.148 "uuid": "020cc785-e210-40ef-9194-9ec9f14da7c9", 00:20:59.148 "is_configured": true, 00:20:59.148 "data_offset": 2048, 00:20:59.148 "data_size": 63488 00:20:59.148 } 00:20:59.148 ] 00:20:59.148 }' 00:20:59.148 22:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:59.148 22:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.715 22:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:59.715 22:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:59.715 22:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.715 22:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:59.715 22:04:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:59.715 22:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:59.715 22:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:59.974 [2024-07-13 22:04:19.141331] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:59.974 22:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:59.974 22:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:59.974 22:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.974 22:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:00.234 22:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:00.234 22:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:00.234 22:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:00.234 [2024-07-13 22:04:19.592002] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:00.493 22:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:00.493 22:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:00.493 22:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:21:00.493 22:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:00.493 22:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:00.493 22:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:00.493 22:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:00.752 [2024-07-13 22:04:20.032604] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:00.752 [2024-07-13 22:04:20.032716] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:00.752 [2024-07-13 22:04:20.129420] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:00.752 [2024-07-13 22:04:20.129471] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:00.752 [2024-07-13 22:04:20.129485] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:21:01.014 22:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:01.014 22:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:01.014 22:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.014 22:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:01.014 22:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:01.014 22:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:01.014 
22:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:21:01.014 22:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:01.014 22:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:01.014 22:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:01.273 BaseBdev2 00:21:01.273 22:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:01.273 22:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:01.273 22:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:01.273 22:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:01.273 22:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:01.273 22:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:01.273 22:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:01.532 22:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:01.533 [ 00:21:01.533 { 00:21:01.533 "name": "BaseBdev2", 00:21:01.533 "aliases": [ 00:21:01.533 "e1cced3a-60fe-47e5-ab1c-fc1b8203709c" 00:21:01.533 ], 00:21:01.533 "product_name": "Malloc disk", 00:21:01.533 "block_size": 512, 00:21:01.533 "num_blocks": 65536, 00:21:01.533 "uuid": 
"e1cced3a-60fe-47e5-ab1c-fc1b8203709c", 00:21:01.533 "assigned_rate_limits": { 00:21:01.533 "rw_ios_per_sec": 0, 00:21:01.533 "rw_mbytes_per_sec": 0, 00:21:01.533 "r_mbytes_per_sec": 0, 00:21:01.533 "w_mbytes_per_sec": 0 00:21:01.533 }, 00:21:01.533 "claimed": false, 00:21:01.533 "zoned": false, 00:21:01.533 "supported_io_types": { 00:21:01.533 "read": true, 00:21:01.533 "write": true, 00:21:01.533 "unmap": true, 00:21:01.533 "flush": true, 00:21:01.533 "reset": true, 00:21:01.533 "nvme_admin": false, 00:21:01.533 "nvme_io": false, 00:21:01.533 "nvme_io_md": false, 00:21:01.533 "write_zeroes": true, 00:21:01.533 "zcopy": true, 00:21:01.533 "get_zone_info": false, 00:21:01.533 "zone_management": false, 00:21:01.533 "zone_append": false, 00:21:01.533 "compare": false, 00:21:01.533 "compare_and_write": false, 00:21:01.533 "abort": true, 00:21:01.533 "seek_hole": false, 00:21:01.533 "seek_data": false, 00:21:01.533 "copy": true, 00:21:01.533 "nvme_iov_md": false 00:21:01.533 }, 00:21:01.533 "memory_domains": [ 00:21:01.533 { 00:21:01.533 "dma_device_id": "system", 00:21:01.533 "dma_device_type": 1 00:21:01.533 }, 00:21:01.533 { 00:21:01.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:01.533 "dma_device_type": 2 00:21:01.533 } 00:21:01.533 ], 00:21:01.533 "driver_specific": {} 00:21:01.533 } 00:21:01.533 ] 00:21:01.533 22:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:01.533 22:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:01.533 22:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:01.533 22:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:01.792 BaseBdev3 00:21:01.792 22:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev 
BaseBdev3 00:21:01.792 22:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:01.792 22:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:01.792 22:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:01.792 22:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:01.792 22:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:01.792 22:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:02.052 22:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:02.052 [ 00:21:02.052 { 00:21:02.052 "name": "BaseBdev3", 00:21:02.052 "aliases": [ 00:21:02.052 "18cefc96-82b3-44aa-8f44-0e5fa598741d" 00:21:02.052 ], 00:21:02.052 "product_name": "Malloc disk", 00:21:02.052 "block_size": 512, 00:21:02.052 "num_blocks": 65536, 00:21:02.052 "uuid": "18cefc96-82b3-44aa-8f44-0e5fa598741d", 00:21:02.052 "assigned_rate_limits": { 00:21:02.052 "rw_ios_per_sec": 0, 00:21:02.052 "rw_mbytes_per_sec": 0, 00:21:02.052 "r_mbytes_per_sec": 0, 00:21:02.052 "w_mbytes_per_sec": 0 00:21:02.052 }, 00:21:02.052 "claimed": false, 00:21:02.052 "zoned": false, 00:21:02.052 "supported_io_types": { 00:21:02.052 "read": true, 00:21:02.052 "write": true, 00:21:02.052 "unmap": true, 00:21:02.052 "flush": true, 00:21:02.052 "reset": true, 00:21:02.052 "nvme_admin": false, 00:21:02.052 "nvme_io": false, 00:21:02.052 "nvme_io_md": false, 00:21:02.052 "write_zeroes": true, 00:21:02.052 "zcopy": true, 00:21:02.052 "get_zone_info": false, 00:21:02.052 
"zone_management": false, 00:21:02.052 "zone_append": false, 00:21:02.052 "compare": false, 00:21:02.052 "compare_and_write": false, 00:21:02.052 "abort": true, 00:21:02.052 "seek_hole": false, 00:21:02.052 "seek_data": false, 00:21:02.052 "copy": true, 00:21:02.052 "nvme_iov_md": false 00:21:02.052 }, 00:21:02.052 "memory_domains": [ 00:21:02.052 { 00:21:02.052 "dma_device_id": "system", 00:21:02.052 "dma_device_type": 1 00:21:02.052 }, 00:21:02.052 { 00:21:02.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.052 "dma_device_type": 2 00:21:02.052 } 00:21:02.052 ], 00:21:02.052 "driver_specific": {} 00:21:02.052 } 00:21:02.052 ] 00:21:02.052 22:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:02.052 22:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:02.052 22:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:02.052 22:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:02.311 BaseBdev4 00:21:02.311 22:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:21:02.311 22:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:21:02.311 22:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:02.311 22:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:02.311 22:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:02.311 22:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:02.311 22:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:02.571 22:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:02.830 [ 00:21:02.830 { 00:21:02.831 "name": "BaseBdev4", 00:21:02.831 "aliases": [ 00:21:02.831 "fb07f0a5-0d71-4a44-a95a-2eecad4b8a13" 00:21:02.831 ], 00:21:02.831 "product_name": "Malloc disk", 00:21:02.831 "block_size": 512, 00:21:02.831 "num_blocks": 65536, 00:21:02.831 "uuid": "fb07f0a5-0d71-4a44-a95a-2eecad4b8a13", 00:21:02.831 "assigned_rate_limits": { 00:21:02.831 "rw_ios_per_sec": 0, 00:21:02.831 "rw_mbytes_per_sec": 0, 00:21:02.831 "r_mbytes_per_sec": 0, 00:21:02.831 "w_mbytes_per_sec": 0 00:21:02.831 }, 00:21:02.831 "claimed": false, 00:21:02.831 "zoned": false, 00:21:02.831 "supported_io_types": { 00:21:02.831 "read": true, 00:21:02.831 "write": true, 00:21:02.831 "unmap": true, 00:21:02.831 "flush": true, 00:21:02.831 "reset": true, 00:21:02.831 "nvme_admin": false, 00:21:02.831 "nvme_io": false, 00:21:02.831 "nvme_io_md": false, 00:21:02.831 "write_zeroes": true, 00:21:02.831 "zcopy": true, 00:21:02.831 "get_zone_info": false, 00:21:02.831 "zone_management": false, 00:21:02.831 "zone_append": false, 00:21:02.831 "compare": false, 00:21:02.831 "compare_and_write": false, 00:21:02.831 "abort": true, 00:21:02.831 "seek_hole": false, 00:21:02.831 "seek_data": false, 00:21:02.831 "copy": true, 00:21:02.831 "nvme_iov_md": false 00:21:02.831 }, 00:21:02.831 "memory_domains": [ 00:21:02.831 { 00:21:02.831 "dma_device_id": "system", 00:21:02.831 "dma_device_type": 1 00:21:02.831 }, 00:21:02.831 { 00:21:02.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.831 "dma_device_type": 2 00:21:02.831 } 00:21:02.831 ], 00:21:02.831 "driver_specific": {} 00:21:02.831 } 00:21:02.831 ] 00:21:02.831 22:04:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:02.831 22:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:02.831 22:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:02.831 22:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:02.831 [2024-07-13 22:04:22.135272] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:02.831 [2024-07-13 22:04:22.135318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:02.831 [2024-07-13 22:04:22.135343] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:02.831 [2024-07-13 22:04:22.137118] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:02.831 [2024-07-13 22:04:22.137166] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:02.831 22:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:02.831 22:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:02.831 22:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:02.831 22:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:02.831 22:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:02.831 22:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:02.831 22:04:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:02.831 22:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:02.831 22:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:02.831 22:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:02.831 22:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:02.831 22:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.091 22:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:03.091 "name": "Existed_Raid", 00:21:03.091 "uuid": "30af6f5c-e2b5-4b98-a53c-88d6432ac83e", 00:21:03.091 "strip_size_kb": 0, 00:21:03.091 "state": "configuring", 00:21:03.091 "raid_level": "raid1", 00:21:03.091 "superblock": true, 00:21:03.091 "num_base_bdevs": 4, 00:21:03.091 "num_base_bdevs_discovered": 3, 00:21:03.091 "num_base_bdevs_operational": 4, 00:21:03.091 "base_bdevs_list": [ 00:21:03.091 { 00:21:03.091 "name": "BaseBdev1", 00:21:03.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.091 "is_configured": false, 00:21:03.091 "data_offset": 0, 00:21:03.091 "data_size": 0 00:21:03.091 }, 00:21:03.091 { 00:21:03.091 "name": "BaseBdev2", 00:21:03.091 "uuid": "e1cced3a-60fe-47e5-ab1c-fc1b8203709c", 00:21:03.091 "is_configured": true, 00:21:03.091 "data_offset": 2048, 00:21:03.091 "data_size": 63488 00:21:03.091 }, 00:21:03.091 { 00:21:03.091 "name": "BaseBdev3", 00:21:03.091 "uuid": "18cefc96-82b3-44aa-8f44-0e5fa598741d", 00:21:03.091 "is_configured": true, 00:21:03.091 "data_offset": 2048, 00:21:03.091 "data_size": 63488 00:21:03.091 }, 00:21:03.091 { 00:21:03.091 "name": "BaseBdev4", 
00:21:03.091 "uuid": "fb07f0a5-0d71-4a44-a95a-2eecad4b8a13", 00:21:03.091 "is_configured": true, 00:21:03.091 "data_offset": 2048, 00:21:03.091 "data_size": 63488 00:21:03.091 } 00:21:03.091 ] 00:21:03.091 }' 00:21:03.091 22:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:03.091 22:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.658 22:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:03.658 [2024-07-13 22:04:23.001527] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:03.658 22:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:03.658 22:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:03.658 22:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:03.658 22:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:03.658 22:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:03.658 22:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:03.658 22:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:03.658 22:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:03.658 22:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:03.658 22:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:03.658 22:04:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.658 22:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:03.918 22:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:03.918 "name": "Existed_Raid", 00:21:03.918 "uuid": "30af6f5c-e2b5-4b98-a53c-88d6432ac83e", 00:21:03.918 "strip_size_kb": 0, 00:21:03.918 "state": "configuring", 00:21:03.918 "raid_level": "raid1", 00:21:03.918 "superblock": true, 00:21:03.918 "num_base_bdevs": 4, 00:21:03.918 "num_base_bdevs_discovered": 2, 00:21:03.918 "num_base_bdevs_operational": 4, 00:21:03.918 "base_bdevs_list": [ 00:21:03.918 { 00:21:03.918 "name": "BaseBdev1", 00:21:03.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.918 "is_configured": false, 00:21:03.918 "data_offset": 0, 00:21:03.918 "data_size": 0 00:21:03.918 }, 00:21:03.918 { 00:21:03.918 "name": null, 00:21:03.918 "uuid": "e1cced3a-60fe-47e5-ab1c-fc1b8203709c", 00:21:03.918 "is_configured": false, 00:21:03.918 "data_offset": 2048, 00:21:03.918 "data_size": 63488 00:21:03.918 }, 00:21:03.918 { 00:21:03.918 "name": "BaseBdev3", 00:21:03.918 "uuid": "18cefc96-82b3-44aa-8f44-0e5fa598741d", 00:21:03.918 "is_configured": true, 00:21:03.918 "data_offset": 2048, 00:21:03.918 "data_size": 63488 00:21:03.918 }, 00:21:03.918 { 00:21:03.918 "name": "BaseBdev4", 00:21:03.918 "uuid": "fb07f0a5-0d71-4a44-a95a-2eecad4b8a13", 00:21:03.918 "is_configured": true, 00:21:03.918 "data_offset": 2048, 00:21:03.918 "data_size": 63488 00:21:03.918 } 00:21:03.918 ] 00:21:03.918 }' 00:21:03.918 22:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:03.918 22:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:04.484 22:04:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.484 22:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:04.484 22:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:04.484 22:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:04.743 [2024-07-13 22:04:24.031756] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:04.743 BaseBdev1 00:21:04.743 22:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:21:04.743 22:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:04.743 22:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:04.743 22:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:04.743 22:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:04.743 22:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:04.743 22:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:05.001 22:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:05.001 [ 00:21:05.001 { 00:21:05.001 "name": "BaseBdev1", 00:21:05.001 "aliases": [ 00:21:05.001 
"4d187270-760f-469b-8121-cf5cd1bbe231" 00:21:05.001 ], 00:21:05.001 "product_name": "Malloc disk", 00:21:05.001 "block_size": 512, 00:21:05.001 "num_blocks": 65536, 00:21:05.001 "uuid": "4d187270-760f-469b-8121-cf5cd1bbe231", 00:21:05.001 "assigned_rate_limits": { 00:21:05.001 "rw_ios_per_sec": 0, 00:21:05.001 "rw_mbytes_per_sec": 0, 00:21:05.001 "r_mbytes_per_sec": 0, 00:21:05.001 "w_mbytes_per_sec": 0 00:21:05.001 }, 00:21:05.001 "claimed": true, 00:21:05.001 "claim_type": "exclusive_write", 00:21:05.001 "zoned": false, 00:21:05.001 "supported_io_types": { 00:21:05.001 "read": true, 00:21:05.001 "write": true, 00:21:05.001 "unmap": true, 00:21:05.001 "flush": true, 00:21:05.001 "reset": true, 00:21:05.001 "nvme_admin": false, 00:21:05.001 "nvme_io": false, 00:21:05.001 "nvme_io_md": false, 00:21:05.001 "write_zeroes": true, 00:21:05.001 "zcopy": true, 00:21:05.001 "get_zone_info": false, 00:21:05.001 "zone_management": false, 00:21:05.001 "zone_append": false, 00:21:05.001 "compare": false, 00:21:05.001 "compare_and_write": false, 00:21:05.001 "abort": true, 00:21:05.001 "seek_hole": false, 00:21:05.001 "seek_data": false, 00:21:05.001 "copy": true, 00:21:05.001 "nvme_iov_md": false 00:21:05.001 }, 00:21:05.001 "memory_domains": [ 00:21:05.001 { 00:21:05.001 "dma_device_id": "system", 00:21:05.001 "dma_device_type": 1 00:21:05.001 }, 00:21:05.001 { 00:21:05.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.001 "dma_device_type": 2 00:21:05.001 } 00:21:05.001 ], 00:21:05.001 "driver_specific": {} 00:21:05.001 } 00:21:05.001 ] 00:21:05.001 22:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:05.001 22:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:05.001 22:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:05.001 22:04:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:05.001 22:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:05.001 22:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:05.001 22:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:05.001 22:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:05.001 22:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:05.001 22:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:05.001 22:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:05.001 22:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.001 22:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:05.260 22:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:05.260 "name": "Existed_Raid", 00:21:05.260 "uuid": "30af6f5c-e2b5-4b98-a53c-88d6432ac83e", 00:21:05.260 "strip_size_kb": 0, 00:21:05.260 "state": "configuring", 00:21:05.260 "raid_level": "raid1", 00:21:05.260 "superblock": true, 00:21:05.260 "num_base_bdevs": 4, 00:21:05.260 "num_base_bdevs_discovered": 3, 00:21:05.260 "num_base_bdevs_operational": 4, 00:21:05.260 "base_bdevs_list": [ 00:21:05.260 { 00:21:05.260 "name": "BaseBdev1", 00:21:05.260 "uuid": "4d187270-760f-469b-8121-cf5cd1bbe231", 00:21:05.260 "is_configured": true, 00:21:05.260 "data_offset": 2048, 00:21:05.260 "data_size": 63488 00:21:05.260 }, 00:21:05.260 { 00:21:05.260 "name": null, 00:21:05.260 "uuid": 
"e1cced3a-60fe-47e5-ab1c-fc1b8203709c", 00:21:05.260 "is_configured": false, 00:21:05.260 "data_offset": 2048, 00:21:05.260 "data_size": 63488 00:21:05.260 }, 00:21:05.260 { 00:21:05.260 "name": "BaseBdev3", 00:21:05.260 "uuid": "18cefc96-82b3-44aa-8f44-0e5fa598741d", 00:21:05.260 "is_configured": true, 00:21:05.260 "data_offset": 2048, 00:21:05.260 "data_size": 63488 00:21:05.260 }, 00:21:05.260 { 00:21:05.260 "name": "BaseBdev4", 00:21:05.260 "uuid": "fb07f0a5-0d71-4a44-a95a-2eecad4b8a13", 00:21:05.260 "is_configured": true, 00:21:05.260 "data_offset": 2048, 00:21:05.260 "data_size": 63488 00:21:05.260 } 00:21:05.260 ] 00:21:05.260 }' 00:21:05.260 22:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:05.260 22:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.826 22:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:05.826 22:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.826 22:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:05.826 22:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:06.084 [2024-07-13 22:04:25.355322] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:06.084 22:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:06.084 22:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:06.084 22:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:21:06.084 22:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:06.084 22:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:06.084 22:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:06.084 22:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:06.084 22:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:06.084 22:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:06.084 22:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:06.084 22:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.084 22:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:06.343 22:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:06.343 "name": "Existed_Raid", 00:21:06.343 "uuid": "30af6f5c-e2b5-4b98-a53c-88d6432ac83e", 00:21:06.343 "strip_size_kb": 0, 00:21:06.343 "state": "configuring", 00:21:06.343 "raid_level": "raid1", 00:21:06.343 "superblock": true, 00:21:06.343 "num_base_bdevs": 4, 00:21:06.343 "num_base_bdevs_discovered": 2, 00:21:06.343 "num_base_bdevs_operational": 4, 00:21:06.343 "base_bdevs_list": [ 00:21:06.343 { 00:21:06.343 "name": "BaseBdev1", 00:21:06.343 "uuid": "4d187270-760f-469b-8121-cf5cd1bbe231", 00:21:06.343 "is_configured": true, 00:21:06.343 "data_offset": 2048, 00:21:06.343 "data_size": 63488 00:21:06.343 }, 00:21:06.343 { 00:21:06.343 "name": null, 00:21:06.343 "uuid": "e1cced3a-60fe-47e5-ab1c-fc1b8203709c", 
00:21:06.343 "is_configured": false, 00:21:06.343 "data_offset": 2048, 00:21:06.343 "data_size": 63488 00:21:06.343 }, 00:21:06.343 { 00:21:06.343 "name": null, 00:21:06.343 "uuid": "18cefc96-82b3-44aa-8f44-0e5fa598741d", 00:21:06.343 "is_configured": false, 00:21:06.343 "data_offset": 2048, 00:21:06.343 "data_size": 63488 00:21:06.343 }, 00:21:06.343 { 00:21:06.343 "name": "BaseBdev4", 00:21:06.343 "uuid": "fb07f0a5-0d71-4a44-a95a-2eecad4b8a13", 00:21:06.343 "is_configured": true, 00:21:06.343 "data_offset": 2048, 00:21:06.343 "data_size": 63488 00:21:06.343 } 00:21:06.343 ] 00:21:06.343 }' 00:21:06.343 22:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:06.343 22:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.912 22:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.912 22:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:06.912 22:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:06.912 22:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:07.171 [2024-07-13 22:04:26.345932] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:07.171 22:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:07.171 22:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:07.171 22:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:21:07.171 22:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:07.171 22:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:07.171 22:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:07.171 22:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:07.171 22:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:07.171 22:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:07.171 22:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:07.171 22:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.171 22:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:07.171 22:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:07.171 "name": "Existed_Raid", 00:21:07.171 "uuid": "30af6f5c-e2b5-4b98-a53c-88d6432ac83e", 00:21:07.171 "strip_size_kb": 0, 00:21:07.171 "state": "configuring", 00:21:07.171 "raid_level": "raid1", 00:21:07.171 "superblock": true, 00:21:07.171 "num_base_bdevs": 4, 00:21:07.171 "num_base_bdevs_discovered": 3, 00:21:07.171 "num_base_bdevs_operational": 4, 00:21:07.171 "base_bdevs_list": [ 00:21:07.171 { 00:21:07.171 "name": "BaseBdev1", 00:21:07.171 "uuid": "4d187270-760f-469b-8121-cf5cd1bbe231", 00:21:07.171 "is_configured": true, 00:21:07.171 "data_offset": 2048, 00:21:07.171 "data_size": 63488 00:21:07.171 }, 00:21:07.171 { 00:21:07.171 "name": null, 00:21:07.172 "uuid": "e1cced3a-60fe-47e5-ab1c-fc1b8203709c", 
00:21:07.172 "is_configured": false, 00:21:07.172 "data_offset": 2048, 00:21:07.172 "data_size": 63488 00:21:07.172 }, 00:21:07.172 { 00:21:07.172 "name": "BaseBdev3", 00:21:07.172 "uuid": "18cefc96-82b3-44aa-8f44-0e5fa598741d", 00:21:07.172 "is_configured": true, 00:21:07.172 "data_offset": 2048, 00:21:07.172 "data_size": 63488 00:21:07.172 }, 00:21:07.172 { 00:21:07.172 "name": "BaseBdev4", 00:21:07.172 "uuid": "fb07f0a5-0d71-4a44-a95a-2eecad4b8a13", 00:21:07.172 "is_configured": true, 00:21:07.172 "data_offset": 2048, 00:21:07.172 "data_size": 63488 00:21:07.172 } 00:21:07.172 ] 00:21:07.172 }' 00:21:07.172 22:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:07.172 22:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.739 22:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.739 22:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:07.998 22:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:07.999 22:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:07.999 [2024-07-13 22:04:27.316495] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:08.258 22:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:08.258 22:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:08.258 22:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:08.258 22:04:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:08.258 22:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:08.258 22:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:08.258 22:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:08.258 22:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:08.258 22:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:08.258 22:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:08.258 22:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:08.258 22:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.258 22:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:08.258 "name": "Existed_Raid", 00:21:08.258 "uuid": "30af6f5c-e2b5-4b98-a53c-88d6432ac83e", 00:21:08.258 "strip_size_kb": 0, 00:21:08.258 "state": "configuring", 00:21:08.258 "raid_level": "raid1", 00:21:08.258 "superblock": true, 00:21:08.258 "num_base_bdevs": 4, 00:21:08.258 "num_base_bdevs_discovered": 2, 00:21:08.258 "num_base_bdevs_operational": 4, 00:21:08.258 "base_bdevs_list": [ 00:21:08.258 { 00:21:08.258 "name": null, 00:21:08.258 "uuid": "4d187270-760f-469b-8121-cf5cd1bbe231", 00:21:08.258 "is_configured": false, 00:21:08.258 "data_offset": 2048, 00:21:08.258 "data_size": 63488 00:21:08.258 }, 00:21:08.258 { 00:21:08.258 "name": null, 00:21:08.258 "uuid": "e1cced3a-60fe-47e5-ab1c-fc1b8203709c", 00:21:08.258 "is_configured": false, 00:21:08.258 
"data_offset": 2048, 00:21:08.258 "data_size": 63488 00:21:08.258 }, 00:21:08.258 { 00:21:08.258 "name": "BaseBdev3", 00:21:08.258 "uuid": "18cefc96-82b3-44aa-8f44-0e5fa598741d", 00:21:08.258 "is_configured": true, 00:21:08.258 "data_offset": 2048, 00:21:08.258 "data_size": 63488 00:21:08.258 }, 00:21:08.258 { 00:21:08.258 "name": "BaseBdev4", 00:21:08.258 "uuid": "fb07f0a5-0d71-4a44-a95a-2eecad4b8a13", 00:21:08.258 "is_configured": true, 00:21:08.258 "data_offset": 2048, 00:21:08.258 "data_size": 63488 00:21:08.258 } 00:21:08.258 ] 00:21:08.258 }' 00:21:08.258 22:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:08.258 22:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.826 22:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:08.826 22:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.085 22:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:09.085 22:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:09.085 [2024-07-13 22:04:28.399355] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:09.085 22:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:09.086 22:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:09.086 22:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:09.086 22:04:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:09.086 22:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:09.086 22:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:09.086 22:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:09.086 22:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:09.086 22:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:09.086 22:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:09.086 22:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.086 22:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:09.345 22:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:09.345 "name": "Existed_Raid", 00:21:09.345 "uuid": "30af6f5c-e2b5-4b98-a53c-88d6432ac83e", 00:21:09.345 "strip_size_kb": 0, 00:21:09.345 "state": "configuring", 00:21:09.345 "raid_level": "raid1", 00:21:09.345 "superblock": true, 00:21:09.345 "num_base_bdevs": 4, 00:21:09.345 "num_base_bdevs_discovered": 3, 00:21:09.345 "num_base_bdevs_operational": 4, 00:21:09.345 "base_bdevs_list": [ 00:21:09.345 { 00:21:09.345 "name": null, 00:21:09.345 "uuid": "4d187270-760f-469b-8121-cf5cd1bbe231", 00:21:09.345 "is_configured": false, 00:21:09.345 "data_offset": 2048, 00:21:09.345 "data_size": 63488 00:21:09.345 }, 00:21:09.345 { 00:21:09.345 "name": "BaseBdev2", 00:21:09.345 "uuid": "e1cced3a-60fe-47e5-ab1c-fc1b8203709c", 00:21:09.345 "is_configured": true, 00:21:09.345 
"data_offset": 2048, 00:21:09.345 "data_size": 63488 00:21:09.345 }, 00:21:09.345 { 00:21:09.345 "name": "BaseBdev3", 00:21:09.345 "uuid": "18cefc96-82b3-44aa-8f44-0e5fa598741d", 00:21:09.345 "is_configured": true, 00:21:09.345 "data_offset": 2048, 00:21:09.345 "data_size": 63488 00:21:09.345 }, 00:21:09.345 { 00:21:09.345 "name": "BaseBdev4", 00:21:09.345 "uuid": "fb07f0a5-0d71-4a44-a95a-2eecad4b8a13", 00:21:09.345 "is_configured": true, 00:21:09.345 "data_offset": 2048, 00:21:09.345 "data_size": 63488 00:21:09.345 } 00:21:09.345 ] 00:21:09.345 }' 00:21:09.345 22:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:09.345 22:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.945 22:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.945 22:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:09.945 22:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:21:09.945 22:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.945 22:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:10.204 22:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 4d187270-760f-469b-8121-cf5cd1bbe231 00:21:10.204 [2024-07-13 22:04:29.566849] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:10.204 [2024-07-13 22:04:29.567105] 
bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042080 00:21:10.204 [2024-07-13 22:04:29.567129] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:10.204 [2024-07-13 22:04:29.567389] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:21:10.204 [2024-07-13 22:04:29.567558] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042080 00:21:10.204 [2024-07-13 22:04:29.567569] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000042080 00:21:10.204 NewBaseBdev 00:21:10.204 [2024-07-13 22:04:29.567709] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:10.204 22:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:21:10.204 22:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:21:10.204 22:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:10.204 22:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:10.204 22:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:10.204 22:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:10.204 22:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:10.463 22:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:10.722 [ 00:21:10.722 { 00:21:10.722 "name": "NewBaseBdev", 00:21:10.722 "aliases": [ 00:21:10.722 
"4d187270-760f-469b-8121-cf5cd1bbe231" 00:21:10.722 ], 00:21:10.722 "product_name": "Malloc disk", 00:21:10.722 "block_size": 512, 00:21:10.722 "num_blocks": 65536, 00:21:10.722 "uuid": "4d187270-760f-469b-8121-cf5cd1bbe231", 00:21:10.722 "assigned_rate_limits": { 00:21:10.722 "rw_ios_per_sec": 0, 00:21:10.722 "rw_mbytes_per_sec": 0, 00:21:10.722 "r_mbytes_per_sec": 0, 00:21:10.722 "w_mbytes_per_sec": 0 00:21:10.722 }, 00:21:10.722 "claimed": true, 00:21:10.722 "claim_type": "exclusive_write", 00:21:10.722 "zoned": false, 00:21:10.722 "supported_io_types": { 00:21:10.722 "read": true, 00:21:10.722 "write": true, 00:21:10.722 "unmap": true, 00:21:10.722 "flush": true, 00:21:10.722 "reset": true, 00:21:10.722 "nvme_admin": false, 00:21:10.722 "nvme_io": false, 00:21:10.722 "nvme_io_md": false, 00:21:10.722 "write_zeroes": true, 00:21:10.722 "zcopy": true, 00:21:10.722 "get_zone_info": false, 00:21:10.722 "zone_management": false, 00:21:10.722 "zone_append": false, 00:21:10.722 "compare": false, 00:21:10.722 "compare_and_write": false, 00:21:10.722 "abort": true, 00:21:10.722 "seek_hole": false, 00:21:10.722 "seek_data": false, 00:21:10.722 "copy": true, 00:21:10.722 "nvme_iov_md": false 00:21:10.722 }, 00:21:10.722 "memory_domains": [ 00:21:10.722 { 00:21:10.722 "dma_device_id": "system", 00:21:10.722 "dma_device_type": 1 00:21:10.722 }, 00:21:10.722 { 00:21:10.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.722 "dma_device_type": 2 00:21:10.722 } 00:21:10.722 ], 00:21:10.722 "driver_specific": {} 00:21:10.722 } 00:21:10.722 ] 00:21:10.722 22:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:10.722 22:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:21:10.722 22:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:10.722 22:04:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:10.722 22:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:10.722 22:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:10.722 22:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:10.722 22:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:10.722 22:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:10.722 22:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:10.722 22:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:10.722 22:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.722 22:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:10.722 22:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:10.722 "name": "Existed_Raid", 00:21:10.722 "uuid": "30af6f5c-e2b5-4b98-a53c-88d6432ac83e", 00:21:10.722 "strip_size_kb": 0, 00:21:10.722 "state": "online", 00:21:10.722 "raid_level": "raid1", 00:21:10.722 "superblock": true, 00:21:10.722 "num_base_bdevs": 4, 00:21:10.722 "num_base_bdevs_discovered": 4, 00:21:10.722 "num_base_bdevs_operational": 4, 00:21:10.722 "base_bdevs_list": [ 00:21:10.722 { 00:21:10.722 "name": "NewBaseBdev", 00:21:10.722 "uuid": "4d187270-760f-469b-8121-cf5cd1bbe231", 00:21:10.722 "is_configured": true, 00:21:10.722 "data_offset": 2048, 00:21:10.722 "data_size": 63488 00:21:10.722 }, 00:21:10.722 { 00:21:10.722 "name": "BaseBdev2", 00:21:10.722 "uuid": 
"e1cced3a-60fe-47e5-ab1c-fc1b8203709c", 00:21:10.722 "is_configured": true, 00:21:10.722 "data_offset": 2048, 00:21:10.722 "data_size": 63488 00:21:10.722 }, 00:21:10.722 { 00:21:10.722 "name": "BaseBdev3", 00:21:10.722 "uuid": "18cefc96-82b3-44aa-8f44-0e5fa598741d", 00:21:10.722 "is_configured": true, 00:21:10.722 "data_offset": 2048, 00:21:10.722 "data_size": 63488 00:21:10.722 }, 00:21:10.722 { 00:21:10.722 "name": "BaseBdev4", 00:21:10.722 "uuid": "fb07f0a5-0d71-4a44-a95a-2eecad4b8a13", 00:21:10.722 "is_configured": true, 00:21:10.722 "data_offset": 2048, 00:21:10.722 "data_size": 63488 00:21:10.722 } 00:21:10.722 ] 00:21:10.722 }' 00:21:10.722 22:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:10.722 22:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.290 22:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:21:11.290 22:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:11.290 22:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:11.290 22:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:11.290 22:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:11.290 22:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:21:11.290 22:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:11.290 22:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:11.550 [2024-07-13 22:04:30.754383] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:11.550 22:04:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:11.550 "name": "Existed_Raid", 00:21:11.550 "aliases": [ 00:21:11.550 "30af6f5c-e2b5-4b98-a53c-88d6432ac83e" 00:21:11.550 ], 00:21:11.550 "product_name": "Raid Volume", 00:21:11.550 "block_size": 512, 00:21:11.550 "num_blocks": 63488, 00:21:11.550 "uuid": "30af6f5c-e2b5-4b98-a53c-88d6432ac83e", 00:21:11.550 "assigned_rate_limits": { 00:21:11.550 "rw_ios_per_sec": 0, 00:21:11.550 "rw_mbytes_per_sec": 0, 00:21:11.550 "r_mbytes_per_sec": 0, 00:21:11.550 "w_mbytes_per_sec": 0 00:21:11.550 }, 00:21:11.550 "claimed": false, 00:21:11.550 "zoned": false, 00:21:11.550 "supported_io_types": { 00:21:11.550 "read": true, 00:21:11.550 "write": true, 00:21:11.550 "unmap": false, 00:21:11.550 "flush": false, 00:21:11.550 "reset": true, 00:21:11.550 "nvme_admin": false, 00:21:11.550 "nvme_io": false, 00:21:11.550 "nvme_io_md": false, 00:21:11.550 "write_zeroes": true, 00:21:11.550 "zcopy": false, 00:21:11.550 "get_zone_info": false, 00:21:11.550 "zone_management": false, 00:21:11.550 "zone_append": false, 00:21:11.550 "compare": false, 00:21:11.550 "compare_and_write": false, 00:21:11.550 "abort": false, 00:21:11.550 "seek_hole": false, 00:21:11.550 "seek_data": false, 00:21:11.550 "copy": false, 00:21:11.550 "nvme_iov_md": false 00:21:11.550 }, 00:21:11.550 "memory_domains": [ 00:21:11.550 { 00:21:11.550 "dma_device_id": "system", 00:21:11.550 "dma_device_type": 1 00:21:11.550 }, 00:21:11.550 { 00:21:11.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.550 "dma_device_type": 2 00:21:11.550 }, 00:21:11.550 { 00:21:11.550 "dma_device_id": "system", 00:21:11.550 "dma_device_type": 1 00:21:11.550 }, 00:21:11.550 { 00:21:11.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.550 "dma_device_type": 2 00:21:11.550 }, 00:21:11.550 { 00:21:11.550 "dma_device_id": "system", 00:21:11.550 "dma_device_type": 1 00:21:11.550 }, 00:21:11.550 { 00:21:11.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:21:11.550 "dma_device_type": 2 00:21:11.550 }, 00:21:11.550 { 00:21:11.550 "dma_device_id": "system", 00:21:11.550 "dma_device_type": 1 00:21:11.550 }, 00:21:11.550 { 00:21:11.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.550 "dma_device_type": 2 00:21:11.550 } 00:21:11.550 ], 00:21:11.550 "driver_specific": { 00:21:11.550 "raid": { 00:21:11.550 "uuid": "30af6f5c-e2b5-4b98-a53c-88d6432ac83e", 00:21:11.550 "strip_size_kb": 0, 00:21:11.550 "state": "online", 00:21:11.550 "raid_level": "raid1", 00:21:11.550 "superblock": true, 00:21:11.550 "num_base_bdevs": 4, 00:21:11.550 "num_base_bdevs_discovered": 4, 00:21:11.550 "num_base_bdevs_operational": 4, 00:21:11.550 "base_bdevs_list": [ 00:21:11.550 { 00:21:11.550 "name": "NewBaseBdev", 00:21:11.550 "uuid": "4d187270-760f-469b-8121-cf5cd1bbe231", 00:21:11.550 "is_configured": true, 00:21:11.550 "data_offset": 2048, 00:21:11.550 "data_size": 63488 00:21:11.550 }, 00:21:11.550 { 00:21:11.550 "name": "BaseBdev2", 00:21:11.550 "uuid": "e1cced3a-60fe-47e5-ab1c-fc1b8203709c", 00:21:11.550 "is_configured": true, 00:21:11.550 "data_offset": 2048, 00:21:11.550 "data_size": 63488 00:21:11.550 }, 00:21:11.550 { 00:21:11.550 "name": "BaseBdev3", 00:21:11.550 "uuid": "18cefc96-82b3-44aa-8f44-0e5fa598741d", 00:21:11.550 "is_configured": true, 00:21:11.550 "data_offset": 2048, 00:21:11.550 "data_size": 63488 00:21:11.550 }, 00:21:11.550 { 00:21:11.550 "name": "BaseBdev4", 00:21:11.550 "uuid": "fb07f0a5-0d71-4a44-a95a-2eecad4b8a13", 00:21:11.550 "is_configured": true, 00:21:11.550 "data_offset": 2048, 00:21:11.550 "data_size": 63488 00:21:11.550 } 00:21:11.550 ] 00:21:11.550 } 00:21:11.550 } 00:21:11.550 }' 00:21:11.551 22:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:11.551 22:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:21:11.551 BaseBdev2 
00:21:11.551 BaseBdev3 00:21:11.551 BaseBdev4' 00:21:11.551 22:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:11.551 22:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:11.551 22:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:11.810 22:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:11.810 "name": "NewBaseBdev", 00:21:11.810 "aliases": [ 00:21:11.810 "4d187270-760f-469b-8121-cf5cd1bbe231" 00:21:11.810 ], 00:21:11.810 "product_name": "Malloc disk", 00:21:11.810 "block_size": 512, 00:21:11.810 "num_blocks": 65536, 00:21:11.810 "uuid": "4d187270-760f-469b-8121-cf5cd1bbe231", 00:21:11.810 "assigned_rate_limits": { 00:21:11.810 "rw_ios_per_sec": 0, 00:21:11.810 "rw_mbytes_per_sec": 0, 00:21:11.810 "r_mbytes_per_sec": 0, 00:21:11.810 "w_mbytes_per_sec": 0 00:21:11.810 }, 00:21:11.810 "claimed": true, 00:21:11.810 "claim_type": "exclusive_write", 00:21:11.810 "zoned": false, 00:21:11.810 "supported_io_types": { 00:21:11.810 "read": true, 00:21:11.810 "write": true, 00:21:11.810 "unmap": true, 00:21:11.810 "flush": true, 00:21:11.810 "reset": true, 00:21:11.810 "nvme_admin": false, 00:21:11.810 "nvme_io": false, 00:21:11.810 "nvme_io_md": false, 00:21:11.810 "write_zeroes": true, 00:21:11.810 "zcopy": true, 00:21:11.810 "get_zone_info": false, 00:21:11.810 "zone_management": false, 00:21:11.810 "zone_append": false, 00:21:11.810 "compare": false, 00:21:11.810 "compare_and_write": false, 00:21:11.810 "abort": true, 00:21:11.810 "seek_hole": false, 00:21:11.810 "seek_data": false, 00:21:11.810 "copy": true, 00:21:11.810 "nvme_iov_md": false 00:21:11.810 }, 00:21:11.810 "memory_domains": [ 00:21:11.810 { 00:21:11.810 "dma_device_id": "system", 00:21:11.810 "dma_device_type": 1 
00:21:11.810 }, 00:21:11.810 { 00:21:11.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.810 "dma_device_type": 2 00:21:11.810 } 00:21:11.810 ], 00:21:11.810 "driver_specific": {} 00:21:11.810 }' 00:21:11.810 22:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:11.810 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:11.810 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:11.810 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:11.810 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:11.810 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:11.810 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:12.069 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:12.069 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:12.069 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:12.069 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:12.069 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:12.069 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:12.069 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:12.069 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:12.328 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:21:12.328 "name": "BaseBdev2", 00:21:12.328 "aliases": [ 00:21:12.328 "e1cced3a-60fe-47e5-ab1c-fc1b8203709c" 00:21:12.328 ], 00:21:12.328 "product_name": "Malloc disk", 00:21:12.328 "block_size": 512, 00:21:12.328 "num_blocks": 65536, 00:21:12.328 "uuid": "e1cced3a-60fe-47e5-ab1c-fc1b8203709c", 00:21:12.328 "assigned_rate_limits": { 00:21:12.328 "rw_ios_per_sec": 0, 00:21:12.328 "rw_mbytes_per_sec": 0, 00:21:12.328 "r_mbytes_per_sec": 0, 00:21:12.328 "w_mbytes_per_sec": 0 00:21:12.328 }, 00:21:12.328 "claimed": true, 00:21:12.328 "claim_type": "exclusive_write", 00:21:12.328 "zoned": false, 00:21:12.328 "supported_io_types": { 00:21:12.328 "read": true, 00:21:12.328 "write": true, 00:21:12.328 "unmap": true, 00:21:12.328 "flush": true, 00:21:12.328 "reset": true, 00:21:12.328 "nvme_admin": false, 00:21:12.328 "nvme_io": false, 00:21:12.328 "nvme_io_md": false, 00:21:12.328 "write_zeroes": true, 00:21:12.328 "zcopy": true, 00:21:12.328 "get_zone_info": false, 00:21:12.328 "zone_management": false, 00:21:12.328 "zone_append": false, 00:21:12.328 "compare": false, 00:21:12.328 "compare_and_write": false, 00:21:12.328 "abort": true, 00:21:12.328 "seek_hole": false, 00:21:12.328 "seek_data": false, 00:21:12.328 "copy": true, 00:21:12.328 "nvme_iov_md": false 00:21:12.328 }, 00:21:12.328 "memory_domains": [ 00:21:12.328 { 00:21:12.328 "dma_device_id": "system", 00:21:12.328 "dma_device_type": 1 00:21:12.328 }, 00:21:12.328 { 00:21:12.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.328 "dma_device_type": 2 00:21:12.328 } 00:21:12.328 ], 00:21:12.328 "driver_specific": {} 00:21:12.328 }' 00:21:12.328 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:12.328 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:12.328 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:12.328 22:04:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:12.328 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:12.328 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:12.328 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:12.328 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:12.328 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:12.328 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:12.587 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:12.587 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:12.587 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:12.587 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:12.587 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:12.587 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:12.587 "name": "BaseBdev3", 00:21:12.587 "aliases": [ 00:21:12.587 "18cefc96-82b3-44aa-8f44-0e5fa598741d" 00:21:12.587 ], 00:21:12.587 "product_name": "Malloc disk", 00:21:12.587 "block_size": 512, 00:21:12.587 "num_blocks": 65536, 00:21:12.587 "uuid": "18cefc96-82b3-44aa-8f44-0e5fa598741d", 00:21:12.587 "assigned_rate_limits": { 00:21:12.587 "rw_ios_per_sec": 0, 00:21:12.587 "rw_mbytes_per_sec": 0, 00:21:12.587 "r_mbytes_per_sec": 0, 00:21:12.587 "w_mbytes_per_sec": 0 00:21:12.587 }, 00:21:12.587 "claimed": true, 
00:21:12.587 "claim_type": "exclusive_write", 00:21:12.587 "zoned": false, 00:21:12.587 "supported_io_types": { 00:21:12.587 "read": true, 00:21:12.587 "write": true, 00:21:12.587 "unmap": true, 00:21:12.587 "flush": true, 00:21:12.587 "reset": true, 00:21:12.587 "nvme_admin": false, 00:21:12.587 "nvme_io": false, 00:21:12.587 "nvme_io_md": false, 00:21:12.587 "write_zeroes": true, 00:21:12.587 "zcopy": true, 00:21:12.587 "get_zone_info": false, 00:21:12.587 "zone_management": false, 00:21:12.587 "zone_append": false, 00:21:12.587 "compare": false, 00:21:12.587 "compare_and_write": false, 00:21:12.587 "abort": true, 00:21:12.587 "seek_hole": false, 00:21:12.587 "seek_data": false, 00:21:12.587 "copy": true, 00:21:12.587 "nvme_iov_md": false 00:21:12.587 }, 00:21:12.587 "memory_domains": [ 00:21:12.587 { 00:21:12.587 "dma_device_id": "system", 00:21:12.587 "dma_device_type": 1 00:21:12.587 }, 00:21:12.587 { 00:21:12.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.587 "dma_device_type": 2 00:21:12.587 } 00:21:12.587 ], 00:21:12.587 "driver_specific": {} 00:21:12.587 }' 00:21:12.587 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:12.845 22:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:12.845 22:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:12.845 22:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:12.845 22:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:12.845 22:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:12.845 22:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:12.845 22:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:12.845 22:04:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:12.845 22:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:13.103 22:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:13.103 22:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:13.103 22:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:13.103 22:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:13.103 22:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:13.103 22:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:13.103 "name": "BaseBdev4", 00:21:13.103 "aliases": [ 00:21:13.103 "fb07f0a5-0d71-4a44-a95a-2eecad4b8a13" 00:21:13.103 ], 00:21:13.103 "product_name": "Malloc disk", 00:21:13.103 "block_size": 512, 00:21:13.103 "num_blocks": 65536, 00:21:13.103 "uuid": "fb07f0a5-0d71-4a44-a95a-2eecad4b8a13", 00:21:13.103 "assigned_rate_limits": { 00:21:13.103 "rw_ios_per_sec": 0, 00:21:13.103 "rw_mbytes_per_sec": 0, 00:21:13.103 "r_mbytes_per_sec": 0, 00:21:13.103 "w_mbytes_per_sec": 0 00:21:13.103 }, 00:21:13.103 "claimed": true, 00:21:13.103 "claim_type": "exclusive_write", 00:21:13.103 "zoned": false, 00:21:13.103 "supported_io_types": { 00:21:13.103 "read": true, 00:21:13.103 "write": true, 00:21:13.103 "unmap": true, 00:21:13.103 "flush": true, 00:21:13.103 "reset": true, 00:21:13.103 "nvme_admin": false, 00:21:13.103 "nvme_io": false, 00:21:13.103 "nvme_io_md": false, 00:21:13.103 "write_zeroes": true, 00:21:13.103 "zcopy": true, 00:21:13.103 "get_zone_info": false, 00:21:13.103 "zone_management": false, 00:21:13.103 "zone_append": false, 00:21:13.103 "compare": false, 00:21:13.103 
"compare_and_write": false, 00:21:13.103 "abort": true, 00:21:13.104 "seek_hole": false, 00:21:13.104 "seek_data": false, 00:21:13.104 "copy": true, 00:21:13.104 "nvme_iov_md": false 00:21:13.104 }, 00:21:13.104 "memory_domains": [ 00:21:13.104 { 00:21:13.104 "dma_device_id": "system", 00:21:13.104 "dma_device_type": 1 00:21:13.104 }, 00:21:13.104 { 00:21:13.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.104 "dma_device_type": 2 00:21:13.104 } 00:21:13.104 ], 00:21:13.104 "driver_specific": {} 00:21:13.104 }' 00:21:13.104 22:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:13.362 22:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:13.362 22:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:13.362 22:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:13.362 22:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:13.362 22:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:13.362 22:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:13.362 22:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:13.362 22:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:13.362 22:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:13.362 22:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:13.362 22:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:13.362 22:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete 
Existed_Raid 00:21:13.621 [2024-07-13 22:04:32.895757] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:13.621 [2024-07-13 22:04:32.895786] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:13.621 [2024-07-13 22:04:32.895862] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:13.621 [2024-07-13 22:04:32.896131] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:13.621 [2024-07-13 22:04:32.896148] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042080 name Existed_Raid, state offline 00:21:13.621 22:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1448644 00:21:13.621 22:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1448644 ']' 00:21:13.621 22:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 1448644 00:21:13.621 22:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:21:13.621 22:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:13.621 22:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1448644 00:21:13.621 22:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:13.621 22:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:13.621 22:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1448644' 00:21:13.621 killing process with pid 1448644 00:21:13.621 22:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 1448644 00:21:13.621 [2024-07-13 22:04:32.964333] 
bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:13.621 22:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 1448644 00:21:14.186 [2024-07-13 22:04:33.281260] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:15.558 22:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:21:15.558 00:21:15.558 real 0m26.319s 00:21:15.558 user 0m46.119s 00:21:15.558 sys 0m4.931s 00:21:15.558 22:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:15.558 22:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.558 ************************************ 00:21:15.558 END TEST raid_state_function_test_sb 00:21:15.558 ************************************ 00:21:15.558 22:04:34 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:15.558 22:04:34 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:21:15.558 22:04:34 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:15.558 22:04:34 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:15.558 22:04:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:15.558 ************************************ 00:21:15.558 START TEST raid_superblock_test 00:21:15.558 ************************************ 00:21:15.558 22:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 4 00:21:15.558 22:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:21:15.558 22:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:21:15.558 22:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:21:15.558 22:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:21:15.558 
22:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:21:15.558 22:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:21:15.558 22:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:21:15.558 22:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:21:15.558 22:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:21:15.558 22:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:21:15.558 22:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:21:15.558 22:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:21:15.558 22:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:21:15.558 22:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:21:15.558 22:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:21:15.558 22:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1453802 00:21:15.558 22:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1453802 /var/tmp/spdk-raid.sock 00:21:15.558 22:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:15.558 22:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 1453802 ']' 00:21:15.558 22:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:15.558 22:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:15.558 22:04:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:15.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:15.558 22:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:15.558 22:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.558 [2024-07-13 22:04:34.696642] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:21:15.558 [2024-07-13 22:04:34.696733] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1453802 ] 00:21:15.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.558 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:15.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.558 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:15.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.558 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:15.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.558 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:15.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.558 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:15.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.558 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:15.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.558 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:15.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.558 EAL: Requested device 0000:3d:01.7 cannot be used 
00:21:15.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.558 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:15.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.558 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:15.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.559 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:15.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.559 EAL: Requested device 0000:3d:02.3 cannot be used 00:21:15.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.559 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:15.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.559 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:15.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.559 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:15.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.559 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:15.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.559 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:15.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.559 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:15.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.559 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:15.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.559 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:15.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.559 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:15.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.559 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:15.559 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.559 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:15.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.559 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:15.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.559 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:15.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.559 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:15.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.559 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:15.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.559 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:15.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.559 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:15.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.559 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:15.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.559 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:15.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:15.559 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:15.559 [2024-07-13 22:04:34.856151] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.817 [2024-07-13 22:04:35.072014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.075 [2024-07-13 22:04:35.320038] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:16.075 [2024-07-13 22:04:35.320078] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:16.075 22:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:16.075 22:04:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@862 -- # return 0 00:21:16.075 22:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:21:16.075 22:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:16.075 22:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:21:16.075 22:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:21:16.075 22:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:16.075 22:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:16.075 22:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:16.075 22:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:16.075 22:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:16.333 malloc1 00:21:16.333 22:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:16.592 [2024-07-13 22:04:35.816837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:16.592 [2024-07-13 22:04:35.816895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.592 [2024-07-13 22:04:35.816927] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:21:16.592 [2024-07-13 22:04:35.816943] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.592 [2024-07-13 22:04:35.819008] vbdev_passthru.c: 708:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:21:16.592 [2024-07-13 22:04:35.819036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:16.592 pt1 00:21:16.592 22:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:16.592 22:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:16.592 22:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:21:16.592 22:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:21:16.592 22:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:16.592 22:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:16.592 22:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:16.592 22:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:16.592 22:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:16.851 malloc2 00:21:16.851 22:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:16.851 [2024-07-13 22:04:36.179822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:16.851 [2024-07-13 22:04:36.179863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.851 [2024-07-13 22:04:36.179907] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:21:16.851 [2024-07-13 22:04:36.179919] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.851 [2024-07-13 22:04:36.181964] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.851 [2024-07-13 22:04:36.181995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:16.851 pt2 00:21:16.851 22:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:16.851 22:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:16.851 22:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:21:16.851 22:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:21:16.851 22:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:16.851 22:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:16.851 22:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:16.851 22:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:16.851 22:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:17.110 malloc3 00:21:17.110 22:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:17.369 [2024-07-13 22:04:36.532355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:17.369 [2024-07-13 22:04:36.532400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:17.369 [2024-07-13 22:04:36.532439] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:21:17.369 [2024-07-13 22:04:36.532451] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:17.369 [2024-07-13 22:04:36.534522] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:17.369 [2024-07-13 22:04:36.534549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:17.369 pt3 00:21:17.369 22:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:17.369 22:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:17.369 22:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:21:17.369 22:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:21:17.369 22:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:21:17.369 22:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:17.369 22:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:17.369 22:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:17.369 22:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:21:17.369 malloc4 00:21:17.369 22:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:17.628 [2024-07-13 22:04:36.910533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:17.628 [2024-07-13 22:04:36.910590] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:17.628 [2024-07-13 22:04:36.910628] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80 00:21:17.628 [2024-07-13 22:04:36.910639] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:17.628 [2024-07-13 22:04:36.912967] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:17.628 [2024-07-13 22:04:36.912995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:17.628 pt4 00:21:17.628 22:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:17.628 22:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:17.628 22:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:21:17.887 [2024-07-13 22:04:37.091065] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:17.887 [2024-07-13 22:04:37.092757] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:17.887 [2024-07-13 22:04:37.092824] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:17.887 [2024-07-13 22:04:37.092865] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:17.887 [2024-07-13 22:04:37.093069] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042080 00:21:17.887 [2024-07-13 22:04:37.093084] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:17.887 [2024-07-13 22:04:37.093337] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:21:17.887 [2024-07-13 22:04:37.093527] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x616000042080 00:21:17.887 [2024-07-13 22:04:37.093541] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042080 00:21:17.887 [2024-07-13 22:04:37.093675] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:17.887 22:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:17.887 22:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:17.887 22:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:17.887 22:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:17.887 22:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:17.887 22:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:17.887 22:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:17.887 22:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:17.887 22:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:17.887 22:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:17.887 22:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.887 22:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.146 22:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:18.146 "name": "raid_bdev1", 00:21:18.146 "uuid": "2dbffd1b-d1ac-4cf1-b0ac-8ca9e3d7bb9f", 00:21:18.146 "strip_size_kb": 0, 00:21:18.146 "state": "online", 00:21:18.146 
"raid_level": "raid1", 00:21:18.146 "superblock": true, 00:21:18.146 "num_base_bdevs": 4, 00:21:18.146 "num_base_bdevs_discovered": 4, 00:21:18.146 "num_base_bdevs_operational": 4, 00:21:18.146 "base_bdevs_list": [ 00:21:18.146 { 00:21:18.146 "name": "pt1", 00:21:18.146 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:18.146 "is_configured": true, 00:21:18.146 "data_offset": 2048, 00:21:18.146 "data_size": 63488 00:21:18.146 }, 00:21:18.146 { 00:21:18.146 "name": "pt2", 00:21:18.146 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:18.146 "is_configured": true, 00:21:18.146 "data_offset": 2048, 00:21:18.146 "data_size": 63488 00:21:18.146 }, 00:21:18.146 { 00:21:18.146 "name": "pt3", 00:21:18.146 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:18.146 "is_configured": true, 00:21:18.146 "data_offset": 2048, 00:21:18.146 "data_size": 63488 00:21:18.146 }, 00:21:18.146 { 00:21:18.146 "name": "pt4", 00:21:18.146 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:18.146 "is_configured": true, 00:21:18.146 "data_offset": 2048, 00:21:18.146 "data_size": 63488 00:21:18.146 } 00:21:18.146 ] 00:21:18.146 }' 00:21:18.146 22:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:18.146 22:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.405 22:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:21:18.405 22:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:18.405 22:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:18.405 22:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:18.405 22:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:18.405 22:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:18.405 22:04:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:18.405 22:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:18.664 [2024-07-13 22:04:37.877420] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:18.665 22:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:18.665 "name": "raid_bdev1", 00:21:18.665 "aliases": [ 00:21:18.665 "2dbffd1b-d1ac-4cf1-b0ac-8ca9e3d7bb9f" 00:21:18.665 ], 00:21:18.665 "product_name": "Raid Volume", 00:21:18.665 "block_size": 512, 00:21:18.665 "num_blocks": 63488, 00:21:18.665 "uuid": "2dbffd1b-d1ac-4cf1-b0ac-8ca9e3d7bb9f", 00:21:18.665 "assigned_rate_limits": { 00:21:18.665 "rw_ios_per_sec": 0, 00:21:18.665 "rw_mbytes_per_sec": 0, 00:21:18.665 "r_mbytes_per_sec": 0, 00:21:18.665 "w_mbytes_per_sec": 0 00:21:18.665 }, 00:21:18.665 "claimed": false, 00:21:18.665 "zoned": false, 00:21:18.665 "supported_io_types": { 00:21:18.665 "read": true, 00:21:18.665 "write": true, 00:21:18.665 "unmap": false, 00:21:18.665 "flush": false, 00:21:18.665 "reset": true, 00:21:18.665 "nvme_admin": false, 00:21:18.665 "nvme_io": false, 00:21:18.665 "nvme_io_md": false, 00:21:18.665 "write_zeroes": true, 00:21:18.665 "zcopy": false, 00:21:18.665 "get_zone_info": false, 00:21:18.665 "zone_management": false, 00:21:18.665 "zone_append": false, 00:21:18.665 "compare": false, 00:21:18.665 "compare_and_write": false, 00:21:18.665 "abort": false, 00:21:18.665 "seek_hole": false, 00:21:18.665 "seek_data": false, 00:21:18.665 "copy": false, 00:21:18.665 "nvme_iov_md": false 00:21:18.665 }, 00:21:18.665 "memory_domains": [ 00:21:18.665 { 00:21:18.665 "dma_device_id": "system", 00:21:18.665 "dma_device_type": 1 00:21:18.665 }, 00:21:18.665 { 00:21:18.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.665 "dma_device_type": 2 
00:21:18.665 }, 00:21:18.665 { 00:21:18.665 "dma_device_id": "system", 00:21:18.665 "dma_device_type": 1 00:21:18.665 }, 00:21:18.665 { 00:21:18.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.665 "dma_device_type": 2 00:21:18.665 }, 00:21:18.665 { 00:21:18.665 "dma_device_id": "system", 00:21:18.665 "dma_device_type": 1 00:21:18.665 }, 00:21:18.665 { 00:21:18.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.665 "dma_device_type": 2 00:21:18.665 }, 00:21:18.665 { 00:21:18.665 "dma_device_id": "system", 00:21:18.665 "dma_device_type": 1 00:21:18.665 }, 00:21:18.665 { 00:21:18.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.665 "dma_device_type": 2 00:21:18.665 } 00:21:18.665 ], 00:21:18.665 "driver_specific": { 00:21:18.665 "raid": { 00:21:18.665 "uuid": "2dbffd1b-d1ac-4cf1-b0ac-8ca9e3d7bb9f", 00:21:18.665 "strip_size_kb": 0, 00:21:18.665 "state": "online", 00:21:18.665 "raid_level": "raid1", 00:21:18.665 "superblock": true, 00:21:18.665 "num_base_bdevs": 4, 00:21:18.665 "num_base_bdevs_discovered": 4, 00:21:18.665 "num_base_bdevs_operational": 4, 00:21:18.665 "base_bdevs_list": [ 00:21:18.665 { 00:21:18.665 "name": "pt1", 00:21:18.665 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:18.665 "is_configured": true, 00:21:18.665 "data_offset": 2048, 00:21:18.665 "data_size": 63488 00:21:18.665 }, 00:21:18.665 { 00:21:18.665 "name": "pt2", 00:21:18.665 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:18.665 "is_configured": true, 00:21:18.665 "data_offset": 2048, 00:21:18.665 "data_size": 63488 00:21:18.665 }, 00:21:18.665 { 00:21:18.665 "name": "pt3", 00:21:18.665 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:18.665 "is_configured": true, 00:21:18.665 "data_offset": 2048, 00:21:18.665 "data_size": 63488 00:21:18.665 }, 00:21:18.665 { 00:21:18.665 "name": "pt4", 00:21:18.665 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:18.665 "is_configured": true, 00:21:18.665 "data_offset": 2048, 00:21:18.665 "data_size": 63488 
00:21:18.665 } 00:21:18.665 ] 00:21:18.665 } 00:21:18.665 } 00:21:18.665 }' 00:21:18.665 22:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:18.665 22:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:18.665 pt2 00:21:18.665 pt3 00:21:18.665 pt4' 00:21:18.665 22:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:18.665 22:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:18.665 22:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:18.925 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:18.925 "name": "pt1", 00:21:18.925 "aliases": [ 00:21:18.925 "00000000-0000-0000-0000-000000000001" 00:21:18.925 ], 00:21:18.925 "product_name": "passthru", 00:21:18.925 "block_size": 512, 00:21:18.925 "num_blocks": 65536, 00:21:18.925 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:18.925 "assigned_rate_limits": { 00:21:18.925 "rw_ios_per_sec": 0, 00:21:18.925 "rw_mbytes_per_sec": 0, 00:21:18.925 "r_mbytes_per_sec": 0, 00:21:18.925 "w_mbytes_per_sec": 0 00:21:18.925 }, 00:21:18.925 "claimed": true, 00:21:18.925 "claim_type": "exclusive_write", 00:21:18.925 "zoned": false, 00:21:18.925 "supported_io_types": { 00:21:18.925 "read": true, 00:21:18.925 "write": true, 00:21:18.925 "unmap": true, 00:21:18.925 "flush": true, 00:21:18.925 "reset": true, 00:21:18.925 "nvme_admin": false, 00:21:18.925 "nvme_io": false, 00:21:18.925 "nvme_io_md": false, 00:21:18.925 "write_zeroes": true, 00:21:18.925 "zcopy": true, 00:21:18.925 "get_zone_info": false, 00:21:18.925 "zone_management": false, 00:21:18.925 "zone_append": false, 00:21:18.925 "compare": false, 00:21:18.925 
"compare_and_write": false, 00:21:18.925 "abort": true, 00:21:18.925 "seek_hole": false, 00:21:18.925 "seek_data": false, 00:21:18.925 "copy": true, 00:21:18.925 "nvme_iov_md": false 00:21:18.925 }, 00:21:18.925 "memory_domains": [ 00:21:18.925 { 00:21:18.925 "dma_device_id": "system", 00:21:18.925 "dma_device_type": 1 00:21:18.925 }, 00:21:18.925 { 00:21:18.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.925 "dma_device_type": 2 00:21:18.925 } 00:21:18.925 ], 00:21:18.925 "driver_specific": { 00:21:18.925 "passthru": { 00:21:18.925 "name": "pt1", 00:21:18.925 "base_bdev_name": "malloc1" 00:21:18.925 } 00:21:18.925 } 00:21:18.925 }' 00:21:18.925 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:18.925 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:18.925 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:18.925 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:18.925 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:18.925 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:18.925 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:18.925 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:19.185 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:19.185 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:19.185 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:19.185 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:19.185 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:19.185 22:04:38 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:19.185 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:19.185 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:19.185 "name": "pt2", 00:21:19.185 "aliases": [ 00:21:19.185 "00000000-0000-0000-0000-000000000002" 00:21:19.185 ], 00:21:19.185 "product_name": "passthru", 00:21:19.185 "block_size": 512, 00:21:19.185 "num_blocks": 65536, 00:21:19.185 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:19.185 "assigned_rate_limits": { 00:21:19.185 "rw_ios_per_sec": 0, 00:21:19.185 "rw_mbytes_per_sec": 0, 00:21:19.185 "r_mbytes_per_sec": 0, 00:21:19.185 "w_mbytes_per_sec": 0 00:21:19.185 }, 00:21:19.185 "claimed": true, 00:21:19.185 "claim_type": "exclusive_write", 00:21:19.185 "zoned": false, 00:21:19.185 "supported_io_types": { 00:21:19.185 "read": true, 00:21:19.185 "write": true, 00:21:19.185 "unmap": true, 00:21:19.185 "flush": true, 00:21:19.185 "reset": true, 00:21:19.185 "nvme_admin": false, 00:21:19.185 "nvme_io": false, 00:21:19.185 "nvme_io_md": false, 00:21:19.185 "write_zeroes": true, 00:21:19.185 "zcopy": true, 00:21:19.185 "get_zone_info": false, 00:21:19.185 "zone_management": false, 00:21:19.185 "zone_append": false, 00:21:19.185 "compare": false, 00:21:19.185 "compare_and_write": false, 00:21:19.185 "abort": true, 00:21:19.185 "seek_hole": false, 00:21:19.185 "seek_data": false, 00:21:19.185 "copy": true, 00:21:19.185 "nvme_iov_md": false 00:21:19.185 }, 00:21:19.185 "memory_domains": [ 00:21:19.185 { 00:21:19.185 "dma_device_id": "system", 00:21:19.185 "dma_device_type": 1 00:21:19.185 }, 00:21:19.185 { 00:21:19.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:19.185 "dma_device_type": 2 00:21:19.185 } 00:21:19.185 ], 00:21:19.185 "driver_specific": { 00:21:19.185 "passthru": { 00:21:19.185 "name": "pt2", 00:21:19.185 
"base_bdev_name": "malloc2" 00:21:19.185 } 00:21:19.185 } 00:21:19.185 }' 00:21:19.185 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:19.445 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:19.445 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:19.445 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:19.445 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:19.445 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:19.445 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:19.445 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:19.445 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:19.445 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:19.705 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:19.705 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:19.705 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:19.705 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:19.705 22:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:19.705 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:19.705 "name": "pt3", 00:21:19.705 "aliases": [ 00:21:19.705 "00000000-0000-0000-0000-000000000003" 00:21:19.705 ], 00:21:19.705 "product_name": "passthru", 00:21:19.705 "block_size": 512, 00:21:19.705 "num_blocks": 65536, 
00:21:19.705 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:19.705 "assigned_rate_limits": { 00:21:19.705 "rw_ios_per_sec": 0, 00:21:19.705 "rw_mbytes_per_sec": 0, 00:21:19.705 "r_mbytes_per_sec": 0, 00:21:19.705 "w_mbytes_per_sec": 0 00:21:19.705 }, 00:21:19.705 "claimed": true, 00:21:19.705 "claim_type": "exclusive_write", 00:21:19.705 "zoned": false, 00:21:19.705 "supported_io_types": { 00:21:19.705 "read": true, 00:21:19.705 "write": true, 00:21:19.705 "unmap": true, 00:21:19.705 "flush": true, 00:21:19.705 "reset": true, 00:21:19.705 "nvme_admin": false, 00:21:19.705 "nvme_io": false, 00:21:19.705 "nvme_io_md": false, 00:21:19.705 "write_zeroes": true, 00:21:19.705 "zcopy": true, 00:21:19.705 "get_zone_info": false, 00:21:19.705 "zone_management": false, 00:21:19.705 "zone_append": false, 00:21:19.705 "compare": false, 00:21:19.705 "compare_and_write": false, 00:21:19.705 "abort": true, 00:21:19.705 "seek_hole": false, 00:21:19.705 "seek_data": false, 00:21:19.705 "copy": true, 00:21:19.705 "nvme_iov_md": false 00:21:19.705 }, 00:21:19.705 "memory_domains": [ 00:21:19.705 { 00:21:19.705 "dma_device_id": "system", 00:21:19.705 "dma_device_type": 1 00:21:19.705 }, 00:21:19.705 { 00:21:19.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:19.705 "dma_device_type": 2 00:21:19.705 } 00:21:19.705 ], 00:21:19.705 "driver_specific": { 00:21:19.705 "passthru": { 00:21:19.705 "name": "pt3", 00:21:19.705 "base_bdev_name": "malloc3" 00:21:19.705 } 00:21:19.705 } 00:21:19.705 }' 00:21:19.705 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:19.964 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:19.964 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:19.964 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:19.964 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:21:19.964 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:19.964 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:19.964 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:19.964 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:19.964 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:19.964 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:19.964 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:19.964 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:19.964 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:19.964 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:21:20.223 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:20.223 "name": "pt4", 00:21:20.223 "aliases": [ 00:21:20.223 "00000000-0000-0000-0000-000000000004" 00:21:20.223 ], 00:21:20.223 "product_name": "passthru", 00:21:20.223 "block_size": 512, 00:21:20.223 "num_blocks": 65536, 00:21:20.223 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:20.223 "assigned_rate_limits": { 00:21:20.223 "rw_ios_per_sec": 0, 00:21:20.223 "rw_mbytes_per_sec": 0, 00:21:20.223 "r_mbytes_per_sec": 0, 00:21:20.223 "w_mbytes_per_sec": 0 00:21:20.223 }, 00:21:20.223 "claimed": true, 00:21:20.223 "claim_type": "exclusive_write", 00:21:20.223 "zoned": false, 00:21:20.223 "supported_io_types": { 00:21:20.223 "read": true, 00:21:20.223 "write": true, 00:21:20.223 "unmap": true, 00:21:20.223 "flush": true, 00:21:20.223 "reset": true, 00:21:20.223 "nvme_admin": 
false, 00:21:20.223 "nvme_io": false, 00:21:20.223 "nvme_io_md": false, 00:21:20.223 "write_zeroes": true, 00:21:20.223 "zcopy": true, 00:21:20.223 "get_zone_info": false, 00:21:20.223 "zone_management": false, 00:21:20.223 "zone_append": false, 00:21:20.223 "compare": false, 00:21:20.223 "compare_and_write": false, 00:21:20.223 "abort": true, 00:21:20.223 "seek_hole": false, 00:21:20.223 "seek_data": false, 00:21:20.223 "copy": true, 00:21:20.223 "nvme_iov_md": false 00:21:20.223 }, 00:21:20.223 "memory_domains": [ 00:21:20.223 { 00:21:20.223 "dma_device_id": "system", 00:21:20.223 "dma_device_type": 1 00:21:20.223 }, 00:21:20.223 { 00:21:20.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:20.223 "dma_device_type": 2 00:21:20.223 } 00:21:20.223 ], 00:21:20.223 "driver_specific": { 00:21:20.223 "passthru": { 00:21:20.223 "name": "pt4", 00:21:20.223 "base_bdev_name": "malloc4" 00:21:20.223 } 00:21:20.223 } 00:21:20.223 }' 00:21:20.223 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:20.223 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:20.223 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:20.223 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:20.482 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:20.482 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:20.482 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:20.482 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:20.482 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:20.482 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:20.482 22:04:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:20.482 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:20.482 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:20.482 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:21:20.742 [2024-07-13 22:04:39.963015] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:20.742 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=2dbffd1b-d1ac-4cf1-b0ac-8ca9e3d7bb9f 00:21:20.742 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 2dbffd1b-d1ac-4cf1-b0ac-8ca9e3d7bb9f ']' 00:21:20.742 22:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:21.001 [2024-07-13 22:04:40.139181] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:21.001 [2024-07-13 22:04:40.139215] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:21.001 [2024-07-13 22:04:40.139300] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:21.001 [2024-07-13 22:04:40.139384] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:21.001 [2024-07-13 22:04:40.139401] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042080 name raid_bdev1, state offline 00:21:21.001 22:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.001 22:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r 
'.[]' 00:21:21.001 22:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:21:21.001 22:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:21:21.001 22:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:21.001 22:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:21.261 22:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:21.261 22:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:21.520 22:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:21.520 22:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:21.520 22:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:21.520 22:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:21:21.780 22:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:21.780 22:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:21.780 22:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:21:21.780 22:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:21.780 22:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:21:21.780 22:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:21.780 22:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:21.780 22:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:21.780 22:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:22.040 22:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:22.040 22:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:22.040 22:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:22.040 22:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:22.040 22:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:21:22.040 22:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:22.040 [2024-07-13 
22:04:41.318407] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:22.040 [2024-07-13 22:04:41.320196] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:22.040 [2024-07-13 22:04:41.320247] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:22.040 [2024-07-13 22:04:41.320281] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:21:22.040 [2024-07-13 22:04:41.320328] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:22.040 [2024-07-13 22:04:41.320374] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:22.040 [2024-07-13 22:04:41.320394] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:22.040 [2024-07-13 22:04:41.320416] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:21:22.040 [2024-07-13 22:04:41.320432] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:22.040 [2024-07-13 22:04:41.320445] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state configuring 00:21:22.040 request: 00:21:22.040 { 00:21:22.040 "name": "raid_bdev1", 00:21:22.040 "raid_level": "raid1", 00:21:22.040 "base_bdevs": [ 00:21:22.040 "malloc1", 00:21:22.040 "malloc2", 00:21:22.040 "malloc3", 00:21:22.040 "malloc4" 00:21:22.040 ], 00:21:22.040 "superblock": false, 00:21:22.040 "method": "bdev_raid_create", 00:21:22.040 "req_id": 1 00:21:22.040 } 00:21:22.040 Got JSON-RPC error response 00:21:22.040 response: 00:21:22.040 { 00:21:22.040 "code": -17, 00:21:22.040 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:22.040 } 00:21:22.040 22:04:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:21:22.040 22:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:22.040 22:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:22.040 22:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:22.040 22:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.040 22:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:21:22.299 22:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:21:22.299 22:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:21:22.299 22:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:22.299 [2024-07-13 22:04:41.663240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:22.299 [2024-07-13 22:04:41.663304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:22.299 [2024-07-13 22:04:41.663324] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:21:22.299 [2024-07-13 22:04:41.663338] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:22.299 [2024-07-13 22:04:41.665581] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:22.299 [2024-07-13 22:04:41.665614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:22.299 [2024-07-13 22:04:41.665709] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:22.299 [2024-07-13 
22:04:41.665760] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:22.299 pt1 00:21:22.299 22:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:21:22.299 22:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:22.299 22:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:22.299 22:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:22.299 22:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:22.299 22:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:22.299 22:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:22.299 22:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:22.299 22:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:22.299 22:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:22.299 22:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.299 22:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.568 22:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:22.568 "name": "raid_bdev1", 00:21:22.568 "uuid": "2dbffd1b-d1ac-4cf1-b0ac-8ca9e3d7bb9f", 00:21:22.568 "strip_size_kb": 0, 00:21:22.568 "state": "configuring", 00:21:22.568 "raid_level": "raid1", 00:21:22.568 "superblock": true, 00:21:22.568 "num_base_bdevs": 4, 00:21:22.568 "num_base_bdevs_discovered": 1, 00:21:22.568 
"num_base_bdevs_operational": 4, 00:21:22.568 "base_bdevs_list": [ 00:21:22.568 { 00:21:22.568 "name": "pt1", 00:21:22.568 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:22.568 "is_configured": true, 00:21:22.568 "data_offset": 2048, 00:21:22.568 "data_size": 63488 00:21:22.568 }, 00:21:22.568 { 00:21:22.568 "name": null, 00:21:22.568 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:22.568 "is_configured": false, 00:21:22.568 "data_offset": 2048, 00:21:22.568 "data_size": 63488 00:21:22.568 }, 00:21:22.568 { 00:21:22.568 "name": null, 00:21:22.568 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:22.568 "is_configured": false, 00:21:22.568 "data_offset": 2048, 00:21:22.568 "data_size": 63488 00:21:22.568 }, 00:21:22.568 { 00:21:22.568 "name": null, 00:21:22.568 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:22.568 "is_configured": false, 00:21:22.568 "data_offset": 2048, 00:21:22.568 "data_size": 63488 00:21:22.568 } 00:21:22.568 ] 00:21:22.568 }' 00:21:22.568 22:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:22.568 22:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.165 22:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:21:23.166 22:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:23.166 [2024-07-13 22:04:42.449315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:23.166 [2024-07-13 22:04:42.449395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:23.166 [2024-07-13 22:04:42.449423] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043580 00:21:23.166 [2024-07-13 22:04:42.449436] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:21:23.166 [2024-07-13 22:04:42.449922] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:23.166 [2024-07-13 22:04:42.449948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:23.166 [2024-07-13 22:04:42.450031] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:23.166 [2024-07-13 22:04:42.450060] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:23.166 pt2 00:21:23.166 22:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:23.425 [2024-07-13 22:04:42.617786] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:23.425 22:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:21:23.425 22:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:23.425 22:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:23.425 22:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:23.425 22:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:23.425 22:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:23.425 22:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:23.425 22:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:23.425 22:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:23.425 22:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:23.425 22:04:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.425 22:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.426 22:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:23.426 "name": "raid_bdev1", 00:21:23.426 "uuid": "2dbffd1b-d1ac-4cf1-b0ac-8ca9e3d7bb9f", 00:21:23.426 "strip_size_kb": 0, 00:21:23.426 "state": "configuring", 00:21:23.426 "raid_level": "raid1", 00:21:23.426 "superblock": true, 00:21:23.426 "num_base_bdevs": 4, 00:21:23.426 "num_base_bdevs_discovered": 1, 00:21:23.426 "num_base_bdevs_operational": 4, 00:21:23.426 "base_bdevs_list": [ 00:21:23.426 { 00:21:23.426 "name": "pt1", 00:21:23.426 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:23.426 "is_configured": true, 00:21:23.426 "data_offset": 2048, 00:21:23.426 "data_size": 63488 00:21:23.426 }, 00:21:23.426 { 00:21:23.426 "name": null, 00:21:23.426 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:23.426 "is_configured": false, 00:21:23.426 "data_offset": 2048, 00:21:23.426 "data_size": 63488 00:21:23.426 }, 00:21:23.426 { 00:21:23.426 "name": null, 00:21:23.426 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:23.426 "is_configured": false, 00:21:23.426 "data_offset": 2048, 00:21:23.426 "data_size": 63488 00:21:23.426 }, 00:21:23.426 { 00:21:23.426 "name": null, 00:21:23.426 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:23.426 "is_configured": false, 00:21:23.426 "data_offset": 2048, 00:21:23.426 "data_size": 63488 00:21:23.426 } 00:21:23.426 ] 00:21:23.426 }' 00:21:23.426 22:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:23.426 22:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.992 22:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:21:23.992 22:04:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:23.992 22:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:24.250 [2024-07-13 22:04:43.447909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:24.250 [2024-07-13 22:04:43.447989] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:24.250 [2024-07-13 22:04:43.448022] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043880 00:21:24.250 [2024-07-13 22:04:43.448034] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:24.250 [2024-07-13 22:04:43.448505] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.250 [2024-07-13 22:04:43.448525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:24.250 [2024-07-13 22:04:43.448609] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:24.250 [2024-07-13 22:04:43.448631] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:24.250 pt2 00:21:24.250 22:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:24.250 22:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:24.250 22:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:24.250 [2024-07-13 22:04:43.616335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:24.250 [2024-07-13 22:04:43.616379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:21:24.250 [2024-07-13 22:04:43.616415] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043b80 00:21:24.250 [2024-07-13 22:04:43.616426] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:24.250 [2024-07-13 22:04:43.616860] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.250 [2024-07-13 22:04:43.616877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:24.250 [2024-07-13 22:04:43.616949] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:24.250 [2024-07-13 22:04:43.616970] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:24.250 pt3 00:21:24.250 22:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:24.250 22:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:24.250 22:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:24.509 [2024-07-13 22:04:43.776799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:24.509 [2024-07-13 22:04:43.776841] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:24.509 [2024-07-13 22:04:43.776859] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043e80 00:21:24.509 [2024-07-13 22:04:43.776870] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:24.509 [2024-07-13 22:04:43.777230] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.509 [2024-07-13 22:04:43.777247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:24.509 [2024-07-13 22:04:43.777308] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:24.509 [2024-07-13 22:04:43.777326] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:24.509 [2024-07-13 22:04:43.777476] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:21:24.509 [2024-07-13 22:04:43.777486] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:24.509 [2024-07-13 22:04:43.777728] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:21:24.509 [2024-07-13 22:04:43.777897] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:21:24.509 [2024-07-13 22:04:43.777920] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:21:24.509 [2024-07-13 22:04:43.778045] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:24.509 pt4 00:21:24.509 22:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:24.509 22:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:24.509 22:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:24.509 22:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:24.509 22:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:24.509 22:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:24.509 22:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:24.509 22:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:24.509 22:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:24.509 
22:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:24.509 22:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:24.509 22:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:24.509 22:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.509 22:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.768 22:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:24.768 "name": "raid_bdev1", 00:21:24.768 "uuid": "2dbffd1b-d1ac-4cf1-b0ac-8ca9e3d7bb9f", 00:21:24.768 "strip_size_kb": 0, 00:21:24.768 "state": "online", 00:21:24.768 "raid_level": "raid1", 00:21:24.768 "superblock": true, 00:21:24.768 "num_base_bdevs": 4, 00:21:24.768 "num_base_bdevs_discovered": 4, 00:21:24.768 "num_base_bdevs_operational": 4, 00:21:24.768 "base_bdevs_list": [ 00:21:24.768 { 00:21:24.768 "name": "pt1", 00:21:24.768 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:24.768 "is_configured": true, 00:21:24.768 "data_offset": 2048, 00:21:24.768 "data_size": 63488 00:21:24.768 }, 00:21:24.768 { 00:21:24.768 "name": "pt2", 00:21:24.768 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:24.768 "is_configured": true, 00:21:24.768 "data_offset": 2048, 00:21:24.768 "data_size": 63488 00:21:24.768 }, 00:21:24.768 { 00:21:24.768 "name": "pt3", 00:21:24.768 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:24.768 "is_configured": true, 00:21:24.768 "data_offset": 2048, 00:21:24.768 "data_size": 63488 00:21:24.768 }, 00:21:24.768 { 00:21:24.768 "name": "pt4", 00:21:24.768 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:24.768 "is_configured": true, 00:21:24.768 "data_offset": 2048, 00:21:24.768 "data_size": 63488 00:21:24.768 } 
00:21:24.768 ] 00:21:24.768 }' 00:21:24.768 22:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:24.768 22:04:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.337 22:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:21:25.337 22:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:25.337 22:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:25.337 22:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:25.337 22:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:25.337 22:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:25.337 22:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:25.337 22:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:25.337 [2024-07-13 22:04:44.591352] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:25.337 22:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:25.337 "name": "raid_bdev1", 00:21:25.337 "aliases": [ 00:21:25.337 "2dbffd1b-d1ac-4cf1-b0ac-8ca9e3d7bb9f" 00:21:25.337 ], 00:21:25.337 "product_name": "Raid Volume", 00:21:25.337 "block_size": 512, 00:21:25.337 "num_blocks": 63488, 00:21:25.337 "uuid": "2dbffd1b-d1ac-4cf1-b0ac-8ca9e3d7bb9f", 00:21:25.337 "assigned_rate_limits": { 00:21:25.337 "rw_ios_per_sec": 0, 00:21:25.337 "rw_mbytes_per_sec": 0, 00:21:25.337 "r_mbytes_per_sec": 0, 00:21:25.337 "w_mbytes_per_sec": 0 00:21:25.337 }, 00:21:25.337 "claimed": false, 00:21:25.337 "zoned": false, 00:21:25.337 "supported_io_types": { 00:21:25.337 
"read": true, 00:21:25.337 "write": true, 00:21:25.337 "unmap": false, 00:21:25.337 "flush": false, 00:21:25.337 "reset": true, 00:21:25.337 "nvme_admin": false, 00:21:25.337 "nvme_io": false, 00:21:25.337 "nvme_io_md": false, 00:21:25.337 "write_zeroes": true, 00:21:25.337 "zcopy": false, 00:21:25.337 "get_zone_info": false, 00:21:25.337 "zone_management": false, 00:21:25.337 "zone_append": false, 00:21:25.337 "compare": false, 00:21:25.337 "compare_and_write": false, 00:21:25.337 "abort": false, 00:21:25.337 "seek_hole": false, 00:21:25.337 "seek_data": false, 00:21:25.337 "copy": false, 00:21:25.337 "nvme_iov_md": false 00:21:25.337 }, 00:21:25.337 "memory_domains": [ 00:21:25.337 { 00:21:25.337 "dma_device_id": "system", 00:21:25.337 "dma_device_type": 1 00:21:25.337 }, 00:21:25.337 { 00:21:25.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:25.337 "dma_device_type": 2 00:21:25.337 }, 00:21:25.337 { 00:21:25.337 "dma_device_id": "system", 00:21:25.337 "dma_device_type": 1 00:21:25.337 }, 00:21:25.337 { 00:21:25.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:25.337 "dma_device_type": 2 00:21:25.337 }, 00:21:25.337 { 00:21:25.337 "dma_device_id": "system", 00:21:25.337 "dma_device_type": 1 00:21:25.337 }, 00:21:25.337 { 00:21:25.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:25.337 "dma_device_type": 2 00:21:25.337 }, 00:21:25.337 { 00:21:25.337 "dma_device_id": "system", 00:21:25.337 "dma_device_type": 1 00:21:25.337 }, 00:21:25.337 { 00:21:25.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:25.337 "dma_device_type": 2 00:21:25.337 } 00:21:25.337 ], 00:21:25.337 "driver_specific": { 00:21:25.337 "raid": { 00:21:25.337 "uuid": "2dbffd1b-d1ac-4cf1-b0ac-8ca9e3d7bb9f", 00:21:25.337 "strip_size_kb": 0, 00:21:25.337 "state": "online", 00:21:25.337 "raid_level": "raid1", 00:21:25.337 "superblock": true, 00:21:25.337 "num_base_bdevs": 4, 00:21:25.337 "num_base_bdevs_discovered": 4, 00:21:25.337 "num_base_bdevs_operational": 4, 00:21:25.337 
"base_bdevs_list": [ 00:21:25.337 { 00:21:25.337 "name": "pt1", 00:21:25.337 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:25.337 "is_configured": true, 00:21:25.337 "data_offset": 2048, 00:21:25.337 "data_size": 63488 00:21:25.337 }, 00:21:25.337 { 00:21:25.337 "name": "pt2", 00:21:25.337 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:25.337 "is_configured": true, 00:21:25.337 "data_offset": 2048, 00:21:25.337 "data_size": 63488 00:21:25.337 }, 00:21:25.337 { 00:21:25.337 "name": "pt3", 00:21:25.337 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:25.337 "is_configured": true, 00:21:25.337 "data_offset": 2048, 00:21:25.337 "data_size": 63488 00:21:25.337 }, 00:21:25.337 { 00:21:25.337 "name": "pt4", 00:21:25.337 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:25.337 "is_configured": true, 00:21:25.337 "data_offset": 2048, 00:21:25.337 "data_size": 63488 00:21:25.337 } 00:21:25.337 ] 00:21:25.337 } 00:21:25.337 } 00:21:25.337 }' 00:21:25.337 22:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:25.337 22:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:25.337 pt2 00:21:25.337 pt3 00:21:25.337 pt4' 00:21:25.337 22:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:25.337 22:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:25.337 22:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:25.597 22:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:25.597 "name": "pt1", 00:21:25.597 "aliases": [ 00:21:25.597 "00000000-0000-0000-0000-000000000001" 00:21:25.597 ], 00:21:25.597 "product_name": "passthru", 00:21:25.597 "block_size": 512, 
00:21:25.597 "num_blocks": 65536, 00:21:25.597 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:25.597 "assigned_rate_limits": { 00:21:25.597 "rw_ios_per_sec": 0, 00:21:25.597 "rw_mbytes_per_sec": 0, 00:21:25.597 "r_mbytes_per_sec": 0, 00:21:25.597 "w_mbytes_per_sec": 0 00:21:25.597 }, 00:21:25.597 "claimed": true, 00:21:25.597 "claim_type": "exclusive_write", 00:21:25.597 "zoned": false, 00:21:25.597 "supported_io_types": { 00:21:25.597 "read": true, 00:21:25.597 "write": true, 00:21:25.597 "unmap": true, 00:21:25.597 "flush": true, 00:21:25.597 "reset": true, 00:21:25.597 "nvme_admin": false, 00:21:25.597 "nvme_io": false, 00:21:25.597 "nvme_io_md": false, 00:21:25.597 "write_zeroes": true, 00:21:25.597 "zcopy": true, 00:21:25.597 "get_zone_info": false, 00:21:25.597 "zone_management": false, 00:21:25.597 "zone_append": false, 00:21:25.597 "compare": false, 00:21:25.597 "compare_and_write": false, 00:21:25.597 "abort": true, 00:21:25.597 "seek_hole": false, 00:21:25.597 "seek_data": false, 00:21:25.597 "copy": true, 00:21:25.597 "nvme_iov_md": false 00:21:25.597 }, 00:21:25.597 "memory_domains": [ 00:21:25.597 { 00:21:25.597 "dma_device_id": "system", 00:21:25.597 "dma_device_type": 1 00:21:25.597 }, 00:21:25.597 { 00:21:25.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:25.597 "dma_device_type": 2 00:21:25.597 } 00:21:25.597 ], 00:21:25.597 "driver_specific": { 00:21:25.597 "passthru": { 00:21:25.597 "name": "pt1", 00:21:25.597 "base_bdev_name": "malloc1" 00:21:25.597 } 00:21:25.597 } 00:21:25.597 }' 00:21:25.597 22:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:25.597 22:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:25.597 22:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:25.597 22:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:25.597 22:04:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:25.597 22:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:25.597 22:04:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:25.857 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:25.857 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:25.857 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:25.857 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:25.857 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:25.857 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:25.857 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:25.857 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:26.117 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:26.117 "name": "pt2", 00:21:26.117 "aliases": [ 00:21:26.117 "00000000-0000-0000-0000-000000000002" 00:21:26.117 ], 00:21:26.117 "product_name": "passthru", 00:21:26.117 "block_size": 512, 00:21:26.117 "num_blocks": 65536, 00:21:26.117 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:26.117 "assigned_rate_limits": { 00:21:26.117 "rw_ios_per_sec": 0, 00:21:26.117 "rw_mbytes_per_sec": 0, 00:21:26.117 "r_mbytes_per_sec": 0, 00:21:26.117 "w_mbytes_per_sec": 0 00:21:26.117 }, 00:21:26.117 "claimed": true, 00:21:26.117 "claim_type": "exclusive_write", 00:21:26.117 "zoned": false, 00:21:26.117 "supported_io_types": { 00:21:26.117 "read": true, 00:21:26.117 "write": true, 00:21:26.117 "unmap": true, 00:21:26.117 "flush": true, 00:21:26.117 
"reset": true, 00:21:26.117 "nvme_admin": false, 00:21:26.117 "nvme_io": false, 00:21:26.117 "nvme_io_md": false, 00:21:26.117 "write_zeroes": true, 00:21:26.117 "zcopy": true, 00:21:26.117 "get_zone_info": false, 00:21:26.117 "zone_management": false, 00:21:26.117 "zone_append": false, 00:21:26.117 "compare": false, 00:21:26.117 "compare_and_write": false, 00:21:26.117 "abort": true, 00:21:26.117 "seek_hole": false, 00:21:26.117 "seek_data": false, 00:21:26.117 "copy": true, 00:21:26.117 "nvme_iov_md": false 00:21:26.117 }, 00:21:26.117 "memory_domains": [ 00:21:26.117 { 00:21:26.117 "dma_device_id": "system", 00:21:26.117 "dma_device_type": 1 00:21:26.117 }, 00:21:26.117 { 00:21:26.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.117 "dma_device_type": 2 00:21:26.117 } 00:21:26.117 ], 00:21:26.117 "driver_specific": { 00:21:26.117 "passthru": { 00:21:26.117 "name": "pt2", 00:21:26.117 "base_bdev_name": "malloc2" 00:21:26.117 } 00:21:26.117 } 00:21:26.117 }' 00:21:26.117 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:26.117 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:26.117 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:26.117 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:26.117 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:26.117 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:26.117 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:26.117 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:26.376 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:26.376 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:26.376 22:04:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:26.376 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:26.376 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:26.376 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:26.376 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:26.376 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:26.376 "name": "pt3", 00:21:26.376 "aliases": [ 00:21:26.376 "00000000-0000-0000-0000-000000000003" 00:21:26.376 ], 00:21:26.376 "product_name": "passthru", 00:21:26.376 "block_size": 512, 00:21:26.377 "num_blocks": 65536, 00:21:26.377 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:26.377 "assigned_rate_limits": { 00:21:26.377 "rw_ios_per_sec": 0, 00:21:26.377 "rw_mbytes_per_sec": 0, 00:21:26.377 "r_mbytes_per_sec": 0, 00:21:26.377 "w_mbytes_per_sec": 0 00:21:26.377 }, 00:21:26.377 "claimed": true, 00:21:26.377 "claim_type": "exclusive_write", 00:21:26.377 "zoned": false, 00:21:26.377 "supported_io_types": { 00:21:26.377 "read": true, 00:21:26.377 "write": true, 00:21:26.377 "unmap": true, 00:21:26.377 "flush": true, 00:21:26.377 "reset": true, 00:21:26.377 "nvme_admin": false, 00:21:26.377 "nvme_io": false, 00:21:26.377 "nvme_io_md": false, 00:21:26.377 "write_zeroes": true, 00:21:26.377 "zcopy": true, 00:21:26.377 "get_zone_info": false, 00:21:26.377 "zone_management": false, 00:21:26.377 "zone_append": false, 00:21:26.377 "compare": false, 00:21:26.377 "compare_and_write": false, 00:21:26.377 "abort": true, 00:21:26.377 "seek_hole": false, 00:21:26.377 "seek_data": false, 00:21:26.377 "copy": true, 00:21:26.377 "nvme_iov_md": false 00:21:26.377 }, 00:21:26.377 "memory_domains": [ 
00:21:26.377 { 00:21:26.377 "dma_device_id": "system", 00:21:26.377 "dma_device_type": 1 00:21:26.377 }, 00:21:26.377 { 00:21:26.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.377 "dma_device_type": 2 00:21:26.377 } 00:21:26.377 ], 00:21:26.377 "driver_specific": { 00:21:26.377 "passthru": { 00:21:26.377 "name": "pt3", 00:21:26.377 "base_bdev_name": "malloc3" 00:21:26.377 } 00:21:26.377 } 00:21:26.377 }' 00:21:26.377 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:26.636 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:26.636 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:26.636 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:26.636 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:26.636 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:26.636 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:26.636 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:26.636 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:26.636 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:26.636 22:04:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:26.636 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:26.636 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:26.636 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:21:26.636 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq 
'.[]' 00:21:26.896 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:26.896 "name": "pt4", 00:21:26.896 "aliases": [ 00:21:26.896 "00000000-0000-0000-0000-000000000004" 00:21:26.896 ], 00:21:26.896 "product_name": "passthru", 00:21:26.896 "block_size": 512, 00:21:26.896 "num_blocks": 65536, 00:21:26.896 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:26.896 "assigned_rate_limits": { 00:21:26.896 "rw_ios_per_sec": 0, 00:21:26.896 "rw_mbytes_per_sec": 0, 00:21:26.896 "r_mbytes_per_sec": 0, 00:21:26.896 "w_mbytes_per_sec": 0 00:21:26.896 }, 00:21:26.896 "claimed": true, 00:21:26.896 "claim_type": "exclusive_write", 00:21:26.896 "zoned": false, 00:21:26.896 "supported_io_types": { 00:21:26.896 "read": true, 00:21:26.896 "write": true, 00:21:26.896 "unmap": true, 00:21:26.896 "flush": true, 00:21:26.896 "reset": true, 00:21:26.896 "nvme_admin": false, 00:21:26.896 "nvme_io": false, 00:21:26.896 "nvme_io_md": false, 00:21:26.896 "write_zeroes": true, 00:21:26.896 "zcopy": true, 00:21:26.896 "get_zone_info": false, 00:21:26.896 "zone_management": false, 00:21:26.896 "zone_append": false, 00:21:26.896 "compare": false, 00:21:26.896 "compare_and_write": false, 00:21:26.896 "abort": true, 00:21:26.896 "seek_hole": false, 00:21:26.896 "seek_data": false, 00:21:26.896 "copy": true, 00:21:26.896 "nvme_iov_md": false 00:21:26.896 }, 00:21:26.896 "memory_domains": [ 00:21:26.896 { 00:21:26.896 "dma_device_id": "system", 00:21:26.896 "dma_device_type": 1 00:21:26.896 }, 00:21:26.896 { 00:21:26.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.896 "dma_device_type": 2 00:21:26.896 } 00:21:26.896 ], 00:21:26.896 "driver_specific": { 00:21:26.896 "passthru": { 00:21:26.896 "name": "pt4", 00:21:26.896 "base_bdev_name": "malloc4" 00:21:26.896 } 00:21:26.896 } 00:21:26.896 }' 00:21:26.896 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:26.896 22:04:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:26.896 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:26.896 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:26.896 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:27.156 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:27.156 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:27.156 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:27.156 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:27.156 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:27.156 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:27.156 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:27.156 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:27.156 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:21:27.416 [2024-07-13 22:04:46.588553] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:27.416 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 2dbffd1b-d1ac-4cf1-b0ac-8ca9e3d7bb9f '!=' 2dbffd1b-d1ac-4cf1-b0ac-8ca9e3d7bb9f ']' 00:21:27.416 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:21:27.416 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:27.416 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:27.416 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:27.416 [2024-07-13 22:04:46.760744] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:27.416 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:27.416 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:27.416 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:27.416 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:27.416 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:27.416 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:27.416 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:27.416 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:27.416 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:27.416 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:27.416 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.416 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.676 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:27.676 "name": "raid_bdev1", 00:21:27.676 "uuid": "2dbffd1b-d1ac-4cf1-b0ac-8ca9e3d7bb9f", 00:21:27.676 "strip_size_kb": 0, 00:21:27.676 "state": "online", 00:21:27.676 "raid_level": "raid1", 00:21:27.676 "superblock": true, 00:21:27.676 
"num_base_bdevs": 4, 00:21:27.676 "num_base_bdevs_discovered": 3, 00:21:27.676 "num_base_bdevs_operational": 3, 00:21:27.676 "base_bdevs_list": [ 00:21:27.676 { 00:21:27.676 "name": null, 00:21:27.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.676 "is_configured": false, 00:21:27.676 "data_offset": 2048, 00:21:27.676 "data_size": 63488 00:21:27.676 }, 00:21:27.676 { 00:21:27.676 "name": "pt2", 00:21:27.676 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:27.676 "is_configured": true, 00:21:27.676 "data_offset": 2048, 00:21:27.676 "data_size": 63488 00:21:27.676 }, 00:21:27.676 { 00:21:27.676 "name": "pt3", 00:21:27.676 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:27.676 "is_configured": true, 00:21:27.676 "data_offset": 2048, 00:21:27.676 "data_size": 63488 00:21:27.676 }, 00:21:27.676 { 00:21:27.676 "name": "pt4", 00:21:27.676 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:27.676 "is_configured": true, 00:21:27.676 "data_offset": 2048, 00:21:27.676 "data_size": 63488 00:21:27.676 } 00:21:27.676 ] 00:21:27.676 }' 00:21:27.676 22:04:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:27.676 22:04:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.245 22:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:28.245 [2024-07-13 22:04:47.595009] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:28.245 [2024-07-13 22:04:47.595044] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:28.245 [2024-07-13 22:04:47.595124] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:28.245 [2024-07-13 22:04:47.595199] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:28.245 
[2024-07-13 22:04:47.595211] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:21:28.245 22:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.245 22:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:21:28.504 22:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:21:28.504 22:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:21:28.504 22:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:21:28.504 22:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:21:28.504 22:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:28.764 22:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:21:28.764 22:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:21:28.764 22:04:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:28.764 22:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:21:28.764 22:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:21:28.764 22:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:21:29.024 22:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:21:29.024 22:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( 
i < num_base_bdevs )) 00:21:29.024 22:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:21:29.024 22:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:21:29.024 22:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:29.283 [2024-07-13 22:04:48.417143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:29.283 [2024-07-13 22:04:48.417205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:29.283 [2024-07-13 22:04:48.417229] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044180 00:21:29.283 [2024-07-13 22:04:48.417242] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:29.283 [2024-07-13 22:04:48.419416] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:29.283 [2024-07-13 22:04:48.419447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:29.283 [2024-07-13 22:04:48.419530] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:29.283 [2024-07-13 22:04:48.419576] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:29.283 pt2 00:21:29.283 22:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:29.283 22:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:29.283 22:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:29.283 22:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:29.283 22:04:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:29.283 22:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:29.283 22:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:29.283 22:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:29.283 22:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:29.283 22:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:29.283 22:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.283 22:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.283 22:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:29.283 "name": "raid_bdev1", 00:21:29.283 "uuid": "2dbffd1b-d1ac-4cf1-b0ac-8ca9e3d7bb9f", 00:21:29.283 "strip_size_kb": 0, 00:21:29.283 "state": "configuring", 00:21:29.283 "raid_level": "raid1", 00:21:29.283 "superblock": true, 00:21:29.283 "num_base_bdevs": 4, 00:21:29.283 "num_base_bdevs_discovered": 1, 00:21:29.283 "num_base_bdevs_operational": 3, 00:21:29.283 "base_bdevs_list": [ 00:21:29.283 { 00:21:29.283 "name": null, 00:21:29.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.283 "is_configured": false, 00:21:29.283 "data_offset": 2048, 00:21:29.283 "data_size": 63488 00:21:29.283 }, 00:21:29.283 { 00:21:29.283 "name": "pt2", 00:21:29.283 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:29.283 "is_configured": true, 00:21:29.283 "data_offset": 2048, 00:21:29.283 "data_size": 63488 00:21:29.283 }, 00:21:29.283 { 00:21:29.283 "name": null, 00:21:29.283 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:29.283 "is_configured": false, 00:21:29.283 
"data_offset": 2048, 00:21:29.283 "data_size": 63488 00:21:29.283 }, 00:21:29.283 { 00:21:29.283 "name": null, 00:21:29.283 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:29.283 "is_configured": false, 00:21:29.283 "data_offset": 2048, 00:21:29.283 "data_size": 63488 00:21:29.283 } 00:21:29.283 ] 00:21:29.283 }' 00:21:29.283 22:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:29.283 22:04:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.851 22:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:21:29.851 22:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:21:29.851 22:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:29.851 [2024-07-13 22:04:49.239295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:29.851 [2024-07-13 22:04:49.239358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:29.851 [2024-07-13 22:04:49.239384] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044a80 00:21:29.851 [2024-07-13 22:04:49.239395] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:29.851 [2024-07-13 22:04:49.239870] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:29.851 [2024-07-13 22:04:49.239890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:29.851 [2024-07-13 22:04:49.239981] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:29.851 [2024-07-13 22:04:49.240003] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:30.109 pt3 00:21:30.109 22:04:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:30.109 22:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:30.109 22:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:30.109 22:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:30.109 22:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:30.109 22:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:30.109 22:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:30.109 22:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:30.109 22:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:30.109 22:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:30.109 22:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.109 22:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.109 22:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:30.109 "name": "raid_bdev1", 00:21:30.109 "uuid": "2dbffd1b-d1ac-4cf1-b0ac-8ca9e3d7bb9f", 00:21:30.109 "strip_size_kb": 0, 00:21:30.109 "state": "configuring", 00:21:30.109 "raid_level": "raid1", 00:21:30.109 "superblock": true, 00:21:30.109 "num_base_bdevs": 4, 00:21:30.109 "num_base_bdevs_discovered": 2, 00:21:30.109 "num_base_bdevs_operational": 3, 00:21:30.109 "base_bdevs_list": [ 00:21:30.109 { 00:21:30.109 "name": null, 00:21:30.109 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:30.109 "is_configured": false, 00:21:30.109 "data_offset": 2048, 00:21:30.109 "data_size": 63488 00:21:30.109 }, 00:21:30.109 { 00:21:30.109 "name": "pt2", 00:21:30.109 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:30.109 "is_configured": true, 00:21:30.109 "data_offset": 2048, 00:21:30.109 "data_size": 63488 00:21:30.109 }, 00:21:30.109 { 00:21:30.109 "name": "pt3", 00:21:30.110 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:30.110 "is_configured": true, 00:21:30.110 "data_offset": 2048, 00:21:30.110 "data_size": 63488 00:21:30.110 }, 00:21:30.110 { 00:21:30.110 "name": null, 00:21:30.110 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:30.110 "is_configured": false, 00:21:30.110 "data_offset": 2048, 00:21:30.110 "data_size": 63488 00:21:30.110 } 00:21:30.110 ] 00:21:30.110 }' 00:21:30.110 22:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:30.110 22:04:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.677 22:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:21:30.677 22:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:21:30.677 22:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:21:30.677 22:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:30.937 [2024-07-13 22:04:50.073539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:30.937 [2024-07-13 22:04:50.073602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:30.937 [2024-07-13 22:04:50.073628] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044d80 00:21:30.937 [2024-07-13 
22:04:50.073640] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:30.937 [2024-07-13 22:04:50.074139] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:30.937 [2024-07-13 22:04:50.074161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:30.937 [2024-07-13 22:04:50.074241] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:30.937 [2024-07-13 22:04:50.074265] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:30.937 [2024-07-13 22:04:50.074412] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000044780 00:21:30.937 [2024-07-13 22:04:50.074423] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:30.937 [2024-07-13 22:04:50.074718] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:21:30.937 [2024-07-13 22:04:50.074890] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000044780 00:21:30.937 [2024-07-13 22:04:50.074916] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000044780 00:21:30.937 [2024-07-13 22:04:50.075065] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:30.937 pt4 00:21:30.937 22:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:30.937 22:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:30.937 22:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:30.937 22:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:30.937 22:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:30.937 22:04:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:30.937 22:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:30.937 22:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:30.937 22:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:30.937 22:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:30.937 22:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.937 22:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.937 22:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:30.937 "name": "raid_bdev1", 00:21:30.937 "uuid": "2dbffd1b-d1ac-4cf1-b0ac-8ca9e3d7bb9f", 00:21:30.937 "strip_size_kb": 0, 00:21:30.937 "state": "online", 00:21:30.937 "raid_level": "raid1", 00:21:30.937 "superblock": true, 00:21:30.937 "num_base_bdevs": 4, 00:21:30.937 "num_base_bdevs_discovered": 3, 00:21:30.937 "num_base_bdevs_operational": 3, 00:21:30.937 "base_bdevs_list": [ 00:21:30.937 { 00:21:30.937 "name": null, 00:21:30.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.937 "is_configured": false, 00:21:30.937 "data_offset": 2048, 00:21:30.937 "data_size": 63488 00:21:30.937 }, 00:21:30.937 { 00:21:30.937 "name": "pt2", 00:21:30.937 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:30.937 "is_configured": true, 00:21:30.937 "data_offset": 2048, 00:21:30.937 "data_size": 63488 00:21:30.937 }, 00:21:30.937 { 00:21:30.937 "name": "pt3", 00:21:30.937 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:30.937 "is_configured": true, 00:21:30.937 "data_offset": 2048, 00:21:30.937 "data_size": 63488 00:21:30.937 }, 00:21:30.937 { 00:21:30.937 "name": 
"pt4", 00:21:30.937 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:30.937 "is_configured": true, 00:21:30.937 "data_offset": 2048, 00:21:30.937 "data_size": 63488 00:21:30.937 } 00:21:30.937 ] 00:21:30.937 }' 00:21:30.937 22:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:30.937 22:04:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.506 22:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:31.506 [2024-07-13 22:04:50.891791] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:31.506 [2024-07-13 22:04:50.891822] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:31.506 [2024-07-13 22:04:50.891911] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:31.506 [2024-07-13 22:04:50.891983] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:31.506 [2024-07-13 22:04:50.891998] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000044780 name raid_bdev1, state offline 00:21:31.764 22:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:21:31.764 22:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.764 22:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:21:31.764 22:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:21:31.764 22:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:21:31.764 22:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:21:31.764 22:04:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:21:32.021 22:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:32.286 [2024-07-13 22:04:51.425146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:32.287 [2024-07-13 22:04:51.425222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:32.287 [2024-07-13 22:04:51.425242] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000045080 00:21:32.287 [2024-07-13 22:04:51.425255] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:32.287 [2024-07-13 22:04:51.427442] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:32.287 [2024-07-13 22:04:51.427470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:32.287 [2024-07-13 22:04:51.427539] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:32.287 [2024-07-13 22:04:51.427590] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:32.287 [2024-07-13 22:04:51.427734] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:32.287 [2024-07-13 22:04:51.427750] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:32.287 [2024-07-13 22:04:51.427768] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000045680 name raid_bdev1, state configuring 00:21:32.287 [2024-07-13 22:04:51.427828] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:32.287 [2024-07-13 
22:04:51.427929] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:32.287 pt1 00:21:32.287 22:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:21:32.287 22:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:32.287 22:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:32.287 22:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:32.287 22:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:32.287 22:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:32.287 22:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:32.287 22:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:32.287 22:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:32.287 22:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:32.287 22:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:32.287 22:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.287 22:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.287 22:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:32.287 "name": "raid_bdev1", 00:21:32.287 "uuid": "2dbffd1b-d1ac-4cf1-b0ac-8ca9e3d7bb9f", 00:21:32.287 "strip_size_kb": 0, 00:21:32.287 "state": "configuring", 00:21:32.287 "raid_level": "raid1", 00:21:32.287 "superblock": true, 00:21:32.287 
"num_base_bdevs": 4, 00:21:32.287 "num_base_bdevs_discovered": 2, 00:21:32.287 "num_base_bdevs_operational": 3, 00:21:32.287 "base_bdevs_list": [ 00:21:32.287 { 00:21:32.287 "name": null, 00:21:32.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.287 "is_configured": false, 00:21:32.287 "data_offset": 2048, 00:21:32.287 "data_size": 63488 00:21:32.287 }, 00:21:32.287 { 00:21:32.287 "name": "pt2", 00:21:32.287 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:32.287 "is_configured": true, 00:21:32.287 "data_offset": 2048, 00:21:32.287 "data_size": 63488 00:21:32.287 }, 00:21:32.287 { 00:21:32.287 "name": "pt3", 00:21:32.287 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:32.287 "is_configured": true, 00:21:32.287 "data_offset": 2048, 00:21:32.287 "data_size": 63488 00:21:32.287 }, 00:21:32.287 { 00:21:32.287 "name": null, 00:21:32.287 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:32.287 "is_configured": false, 00:21:32.287 "data_offset": 2048, 00:21:32.287 "data_size": 63488 00:21:32.287 } 00:21:32.287 ] 00:21:32.287 }' 00:21:32.287 22:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:32.287 22:04:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.859 22:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:32.859 22:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:21:33.119 22:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:21:33.119 22:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:33.119 [2024-07-13 22:04:52.431814] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:33.119 [2024-07-13 22:04:52.431877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:33.119 [2024-07-13 22:04:52.431912] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000045c80 00:21:33.119 [2024-07-13 22:04:52.431925] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:33.119 [2024-07-13 22:04:52.432405] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:33.119 [2024-07-13 22:04:52.432426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:33.119 [2024-07-13 22:04:52.432504] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:33.119 [2024-07-13 22:04:52.432527] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:33.119 [2024-07-13 22:04:52.432683] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000045980 00:21:33.119 [2024-07-13 22:04:52.432694] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:33.119 [2024-07-13 22:04:52.432944] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:21:33.119 [2024-07-13 22:04:52.433114] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000045980 00:21:33.119 [2024-07-13 22:04:52.433128] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000045980 00:21:33.119 [2024-07-13 22:04:52.433261] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:33.119 pt4 00:21:33.119 22:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:33.119 22:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:33.119 22:04:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:33.119 22:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:33.119 22:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:33.119 22:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:33.119 22:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:33.119 22:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:33.119 22:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:33.119 22:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:33.119 22:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.119 22:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.378 22:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:33.378 "name": "raid_bdev1", 00:21:33.378 "uuid": "2dbffd1b-d1ac-4cf1-b0ac-8ca9e3d7bb9f", 00:21:33.378 "strip_size_kb": 0, 00:21:33.378 "state": "online", 00:21:33.378 "raid_level": "raid1", 00:21:33.378 "superblock": true, 00:21:33.378 "num_base_bdevs": 4, 00:21:33.378 "num_base_bdevs_discovered": 3, 00:21:33.378 "num_base_bdevs_operational": 3, 00:21:33.378 "base_bdevs_list": [ 00:21:33.378 { 00:21:33.378 "name": null, 00:21:33.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.378 "is_configured": false, 00:21:33.378 "data_offset": 2048, 00:21:33.378 "data_size": 63488 00:21:33.378 }, 00:21:33.378 { 00:21:33.378 "name": "pt2", 00:21:33.378 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:33.378 
"is_configured": true, 00:21:33.378 "data_offset": 2048, 00:21:33.378 "data_size": 63488 00:21:33.378 }, 00:21:33.378 { 00:21:33.378 "name": "pt3", 00:21:33.378 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:33.378 "is_configured": true, 00:21:33.378 "data_offset": 2048, 00:21:33.378 "data_size": 63488 00:21:33.378 }, 00:21:33.378 { 00:21:33.378 "name": "pt4", 00:21:33.378 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:33.378 "is_configured": true, 00:21:33.378 "data_offset": 2048, 00:21:33.378 "data_size": 63488 00:21:33.378 } 00:21:33.378 ] 00:21:33.378 }' 00:21:33.378 22:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:33.378 22:04:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.946 22:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:33.946 22:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:21:33.946 22:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:21:33.946 22:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:33.946 22:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:21:34.206 [2024-07-13 22:04:53.410766] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:34.206 22:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 2dbffd1b-d1ac-4cf1-b0ac-8ca9e3d7bb9f '!=' 2dbffd1b-d1ac-4cf1-b0ac-8ca9e3d7bb9f ']' 00:21:34.206 22:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1453802 00:21:34.206 22:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- 
# '[' -z 1453802 ']' 00:21:34.206 22:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 1453802 00:21:34.206 22:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:21:34.206 22:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:34.206 22:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1453802 00:21:34.206 22:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:34.206 22:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:34.206 22:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1453802' 00:21:34.206 killing process with pid 1453802 00:21:34.206 22:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 1453802 00:21:34.206 [2024-07-13 22:04:53.481146] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:34.206 22:04:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 1453802 00:21:34.206 [2024-07-13 22:04:53.481242] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:34.206 [2024-07-13 22:04:53.481315] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:34.206 [2024-07-13 22:04:53.481329] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000045980 name raid_bdev1, state offline 00:21:34.465 [2024-07-13 22:04:53.807346] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:35.881 22:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:21:35.881 00:21:35.881 real 0m20.429s 00:21:35.881 user 0m35.741s 00:21:35.881 sys 0m3.811s 00:21:35.881 22:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:21:35.881 22:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.881 ************************************ 00:21:35.881 END TEST raid_superblock_test 00:21:35.881 ************************************ 00:21:35.881 22:04:55 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:35.881 22:04:55 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:21:35.881 22:04:55 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:35.881 22:04:55 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:35.881 22:04:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:35.881 ************************************ 00:21:35.881 START TEST raid_read_error_test 00:21:35.881 ************************************ 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 read 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 
00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.0JP3wzutZR 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1457779 
00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1457779 /var/tmp/spdk-raid.sock 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 1457779 ']' 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:35.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.881 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:35.881 [2024-07-13 22:04:55.214600] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:21:35.881 [2024-07-13 22:04:55.214717] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1457779 ] 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3d:02.3 cannot be used 
00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:36.167 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:36.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.167 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:36.167 [2024-07-13 22:04:55.377572] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.427 [2024-07-13 22:04:55.582389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.686 [2024-07-13 22:04:55.823452] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:36.686 [2024-07-13 22:04:55.823480] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:36.686 22:04:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:36.686 22:04:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:21:36.686 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:36.686 22:04:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:36.944 BaseBdev1_malloc 00:21:36.944 22:04:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:37.203 true 00:21:37.203 22:04:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:37.203 [2024-07-13 22:04:56.498692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:37.203 [2024-07-13 22:04:56.498746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.203 [2024-07-13 22:04:56.498784] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:21:37.203 [2024-07-13 22:04:56.498801] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:37.203 [2024-07-13 22:04:56.500919] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:37.203 [2024-07-13 22:04:56.500951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:37.203 BaseBdev1 00:21:37.203 22:04:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:37.203 22:04:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:37.462 BaseBdev2_malloc 00:21:37.462 22:04:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:37.721 true 00:21:37.721 22:04:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:37.721 [2024-07-13 22:04:57.022868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:21:37.721 [2024-07-13 22:04:57.022924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.721 [2024-07-13 22:04:57.022961] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:21:37.721 [2024-07-13 22:04:57.022976] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:37.721 [2024-07-13 22:04:57.025028] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:37.721 [2024-07-13 22:04:57.025067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:37.721 BaseBdev2 00:21:37.721 22:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:37.721 22:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:37.980 BaseBdev3_malloc 00:21:37.980 22:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:38.239 true 00:21:38.239 22:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:38.239 [2024-07-13 22:04:57.554622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:38.239 [2024-07-13 22:04:57.554672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.239 [2024-07-13 22:04:57.554694] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:21:38.239 [2024-07-13 22:04:57.554707] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.239 [2024-07-13 
22:04:57.556854] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.239 [2024-07-13 22:04:57.556885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:38.239 BaseBdev3 00:21:38.239 22:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:38.239 22:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:38.498 BaseBdev4_malloc 00:21:38.498 22:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:21:38.757 true 00:21:38.757 22:04:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:21:38.757 [2024-07-13 22:04:58.098171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:21:38.757 [2024-07-13 22:04:58.098226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.757 [2024-07-13 22:04:58.098248] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:21:38.757 [2024-07-13 22:04:58.098261] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.757 [2024-07-13 22:04:58.100387] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.757 [2024-07-13 22:04:58.100419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:38.757 BaseBdev4 00:21:38.757 22:04:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:21:39.016 [2024-07-13 22:04:58.266649] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:39.016 [2024-07-13 22:04:58.268422] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:39.016 [2024-07-13 22:04:58.268500] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:39.016 [2024-07-13 22:04:58.268555] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:39.016 [2024-07-13 22:04:58.268770] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042c80 00:21:39.016 [2024-07-13 22:04:58.268786] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:39.016 [2024-07-13 22:04:58.269041] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:21:39.016 [2024-07-13 22:04:58.269249] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042c80 00:21:39.016 [2024-07-13 22:04:58.269259] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042c80 00:21:39.016 [2024-07-13 22:04:58.269410] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:39.016 22:04:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:39.016 22:04:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:39.016 22:04:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:39.016 22:04:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:39.016 22:04:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:39.016 22:04:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:39.016 22:04:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:39.016 22:04:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:39.016 22:04:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:39.016 22:04:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:39.016 22:04:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.016 22:04:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.275 22:04:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:39.275 "name": "raid_bdev1", 00:21:39.275 "uuid": "351f7414-1c85-4b57-b0fd-bd2bed850620", 00:21:39.275 "strip_size_kb": 0, 00:21:39.275 "state": "online", 00:21:39.275 "raid_level": "raid1", 00:21:39.275 "superblock": true, 00:21:39.275 "num_base_bdevs": 4, 00:21:39.275 "num_base_bdevs_discovered": 4, 00:21:39.275 "num_base_bdevs_operational": 4, 00:21:39.275 "base_bdevs_list": [ 00:21:39.275 { 00:21:39.275 "name": "BaseBdev1", 00:21:39.275 "uuid": "81710fd7-1384-5510-97d1-324b2a90e017", 00:21:39.275 "is_configured": true, 00:21:39.275 "data_offset": 2048, 00:21:39.275 "data_size": 63488 00:21:39.275 }, 00:21:39.275 { 00:21:39.275 "name": "BaseBdev2", 00:21:39.275 "uuid": "4de5bba3-f6df-5813-a066-bea00ebbf4e7", 00:21:39.275 "is_configured": true, 00:21:39.275 "data_offset": 2048, 00:21:39.275 "data_size": 63488 00:21:39.275 }, 00:21:39.275 { 00:21:39.275 "name": "BaseBdev3", 00:21:39.275 "uuid": "ad616f15-5c4a-5e9f-8b04-191c0c83bd3c", 00:21:39.275 "is_configured": true, 00:21:39.275 "data_offset": 2048, 00:21:39.275 "data_size": 63488 
00:21:39.275 }, 00:21:39.275 { 00:21:39.275 "name": "BaseBdev4", 00:21:39.275 "uuid": "fe16c485-aec6-577a-8331-2197fa7f4fca", 00:21:39.275 "is_configured": true, 00:21:39.275 "data_offset": 2048, 00:21:39.275 "data_size": 63488 00:21:39.275 } 00:21:39.275 ] 00:21:39.275 }' 00:21:39.275 22:04:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:39.275 22:04:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.842 22:04:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:21:39.842 22:04:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:39.842 [2024-07-13 22:04:59.046226] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:21:40.778 22:04:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:40.778 22:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:21:40.778 22:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:21:40.778 22:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:21:40.778 22:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:21:40.778 22:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:40.778 22:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:40.778 22:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:40.778 22:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- 
# local raid_level=raid1 00:21:40.779 22:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:40.779 22:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:40.779 22:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:40.779 22:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:40.779 22:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:40.779 22:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:40.779 22:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.779 22:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.037 22:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:41.037 "name": "raid_bdev1", 00:21:41.037 "uuid": "351f7414-1c85-4b57-b0fd-bd2bed850620", 00:21:41.037 "strip_size_kb": 0, 00:21:41.037 "state": "online", 00:21:41.037 "raid_level": "raid1", 00:21:41.037 "superblock": true, 00:21:41.037 "num_base_bdevs": 4, 00:21:41.037 "num_base_bdevs_discovered": 4, 00:21:41.037 "num_base_bdevs_operational": 4, 00:21:41.037 "base_bdevs_list": [ 00:21:41.037 { 00:21:41.037 "name": "BaseBdev1", 00:21:41.037 "uuid": "81710fd7-1384-5510-97d1-324b2a90e017", 00:21:41.037 "is_configured": true, 00:21:41.037 "data_offset": 2048, 00:21:41.037 "data_size": 63488 00:21:41.037 }, 00:21:41.037 { 00:21:41.037 "name": "BaseBdev2", 00:21:41.037 "uuid": "4de5bba3-f6df-5813-a066-bea00ebbf4e7", 00:21:41.037 "is_configured": true, 00:21:41.037 "data_offset": 2048, 00:21:41.037 "data_size": 63488 00:21:41.037 }, 00:21:41.037 { 00:21:41.037 "name": "BaseBdev3", 00:21:41.037 
"uuid": "ad616f15-5c4a-5e9f-8b04-191c0c83bd3c", 00:21:41.037 "is_configured": true, 00:21:41.037 "data_offset": 2048, 00:21:41.037 "data_size": 63488 00:21:41.037 }, 00:21:41.037 { 00:21:41.037 "name": "BaseBdev4", 00:21:41.037 "uuid": "fe16c485-aec6-577a-8331-2197fa7f4fca", 00:21:41.037 "is_configured": true, 00:21:41.037 "data_offset": 2048, 00:21:41.037 "data_size": 63488 00:21:41.037 } 00:21:41.037 ] 00:21:41.037 }' 00:21:41.037 22:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:41.037 22:05:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.606 22:05:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:41.606 [2024-07-13 22:05:00.996098] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:41.606 [2024-07-13 22:05:00.996136] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:41.865 [2024-07-13 22:05:00.998497] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:41.865 [2024-07-13 22:05:00.998547] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:41.865 [2024-07-13 22:05:00.998658] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:41.865 [2024-07-13 22:05:00.998678] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042c80 name raid_bdev1, state offline 00:21:41.865 0 00:21:41.865 22:05:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1457779 00:21:41.865 22:05:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 1457779 ']' 00:21:41.866 22:05:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 1457779 00:21:41.866 22:05:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@953 -- # uname 00:21:41.866 22:05:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:41.866 22:05:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1457779 00:21:41.866 22:05:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:41.866 22:05:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:41.866 22:05:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1457779' 00:21:41.866 killing process with pid 1457779 00:21:41.866 22:05:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 1457779 00:21:41.866 [2024-07-13 22:05:01.058115] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:41.866 22:05:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 1457779 00:21:42.124 [2024-07-13 22:05:01.317196] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:43.503 22:05:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.0JP3wzutZR 00:21:43.503 22:05:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:21:43.503 22:05:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:21:43.503 22:05:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:21:43.503 22:05:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:21:43.503 22:05:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:43.503 22:05:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:43.503 22:05:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:21:43.503 00:21:43.503 real 0m7.486s 00:21:43.503 user 0m10.640s 00:21:43.503 
sys 0m1.219s 00:21:43.503 22:05:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:43.504 22:05:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.504 ************************************ 00:21:43.504 END TEST raid_read_error_test 00:21:43.504 ************************************ 00:21:43.504 22:05:02 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:43.504 22:05:02 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:21:43.504 22:05:02 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:43.504 22:05:02 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:43.504 22:05:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:43.504 ************************************ 00:21:43.504 START TEST raid_write_error_test 00:21:43.504 ************************************ 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 write 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 
-- # echo BaseBdev2 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # 
bdevperf_log=/raidtest/tmp.kKE5lLm6qF 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1459202 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1459202 /var/tmp/spdk-raid.sock 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 1459202 ']' 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:43.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:43.504 22:05:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.504 [2024-07-13 22:05:02.795632] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:21:43.504 [2024-07-13 22:05:02.795725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1459202 ] 00:21:43.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.504 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:43.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.504 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:43.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.504 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:43.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.504 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:43.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.504 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:43.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.504 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:43.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.504 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:43.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.504 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:43.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.504 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:43.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.504 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:43.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.504 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:43.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.504 EAL: Requested device 0000:3d:02.3 cannot be used 
00:21:43.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.504 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:43.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.504 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:43.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.504 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:43.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.504 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:43.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.763 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:43.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.763 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:43.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.763 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:43.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.763 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:43.764 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.764 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:43.764 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.764 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:43.764 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.764 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:43.764 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.764 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:43.764 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.764 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:43.764 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.764 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:43.764 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.764 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:43.764 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.764 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:43.764 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.764 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:43.764 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.764 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:43.764 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.764 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:43.764 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.764 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:43.764 [2024-07-13 22:05:02.956092] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.034 [2024-07-13 22:05:03.159223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.034 [2024-07-13 22:05:03.398620] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:44.034 [2024-07-13 22:05:03.398651] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:44.302 22:05:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:44.302 22:05:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:21:44.302 22:05:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:44.302 22:05:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:44.560 BaseBdev1_malloc 00:21:44.560 22:05:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:44.560 true 00:21:44.560 22:05:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:44.818 [2024-07-13 22:05:04.083619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:44.818 [2024-07-13 22:05:04.083673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:44.818 [2024-07-13 22:05:04.083711] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:21:44.818 [2024-07-13 22:05:04.083728] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:44.818 [2024-07-13 22:05:04.085781] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:44.818 [2024-07-13 22:05:04.085811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:44.818 BaseBdev1 00:21:44.818 22:05:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:44.818 22:05:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:45.078 BaseBdev2_malloc 00:21:45.078 22:05:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:45.078 true 00:21:45.078 22:05:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:45.337 [2024-07-13 22:05:04.607641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:21:45.337 [2024-07-13 22:05:04.607690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:45.337 [2024-07-13 22:05:04.607710] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:21:45.337 [2024-07-13 22:05:04.607724] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:45.337 [2024-07-13 22:05:04.609573] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:45.337 [2024-07-13 22:05:04.609601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:45.337 BaseBdev2 00:21:45.337 22:05:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:45.337 22:05:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:45.595 BaseBdev3_malloc 00:21:45.595 22:05:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:45.854 true 00:21:45.854 22:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:45.854 [2024-07-13 22:05:05.157857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:45.854 [2024-07-13 22:05:05.157910] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:45.854 [2024-07-13 22:05:05.157932] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:21:45.854 [2024-07-13 22:05:05.157946] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:45.854 
[2024-07-13 22:05:05.159991] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:45.854 [2024-07-13 22:05:05.160019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:45.854 BaseBdev3 00:21:45.854 22:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:45.854 22:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:46.113 BaseBdev4_malloc 00:21:46.113 22:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:21:46.373 true 00:21:46.373 22:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:21:46.373 [2024-07-13 22:05:05.706249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:21:46.373 [2024-07-13 22:05:05.706301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:46.373 [2024-07-13 22:05:05.706338] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:21:46.373 [2024-07-13 22:05:05.706352] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:46.373 [2024-07-13 22:05:05.708398] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:46.373 [2024-07-13 22:05:05.708426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:46.373 BaseBdev4 00:21:46.373 22:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:21:46.632 [2024-07-13 22:05:05.878729] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:46.632 [2024-07-13 22:05:05.880364] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:46.632 [2024-07-13 22:05:05.880436] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:46.632 [2024-07-13 22:05:05.880496] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:46.633 [2024-07-13 22:05:05.880692] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042c80 00:21:46.633 [2024-07-13 22:05:05.880707] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:46.633 [2024-07-13 22:05:05.880940] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:21:46.633 [2024-07-13 22:05:05.881161] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042c80 00:21:46.633 [2024-07-13 22:05:05.881172] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042c80 00:21:46.633 [2024-07-13 22:05:05.881311] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:46.633 22:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:46.633 22:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:46.633 22:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:46.633 22:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:46.633 22:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:46.633 22:05:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:46.633 22:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:46.633 22:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:46.633 22:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:46.633 22:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:46.633 22:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.633 22:05:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.893 22:05:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:46.893 "name": "raid_bdev1", 00:21:46.893 "uuid": "2ab394d6-df93-4e86-ac18-7215477ac0e7", 00:21:46.893 "strip_size_kb": 0, 00:21:46.893 "state": "online", 00:21:46.893 "raid_level": "raid1", 00:21:46.893 "superblock": true, 00:21:46.893 "num_base_bdevs": 4, 00:21:46.893 "num_base_bdevs_discovered": 4, 00:21:46.893 "num_base_bdevs_operational": 4, 00:21:46.893 "base_bdevs_list": [ 00:21:46.893 { 00:21:46.893 "name": "BaseBdev1", 00:21:46.893 "uuid": "31292865-6f69-5920-a438-18604264d21c", 00:21:46.893 "is_configured": true, 00:21:46.893 "data_offset": 2048, 00:21:46.893 "data_size": 63488 00:21:46.893 }, 00:21:46.893 { 00:21:46.893 "name": "BaseBdev2", 00:21:46.893 "uuid": "d96c2369-dd37-5690-9bdb-547c9e604a2d", 00:21:46.893 "is_configured": true, 00:21:46.893 "data_offset": 2048, 00:21:46.893 "data_size": 63488 00:21:46.893 }, 00:21:46.893 { 00:21:46.893 "name": "BaseBdev3", 00:21:46.893 "uuid": "248a3496-08f1-5ef0-be69-44f817dbf927", 00:21:46.893 "is_configured": true, 00:21:46.893 "data_offset": 2048, 00:21:46.893 "data_size": 
63488 00:21:46.893 }, 00:21:46.893 { 00:21:46.893 "name": "BaseBdev4", 00:21:46.893 "uuid": "05e513aa-8b21-5bd4-9f22-46670dc57c09", 00:21:46.893 "is_configured": true, 00:21:46.893 "data_offset": 2048, 00:21:46.893 "data_size": 63488 00:21:46.893 } 00:21:46.893 ] 00:21:46.893 }' 00:21:46.893 22:05:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:46.893 22:05:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.152 22:05:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:47.152 22:05:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:21:47.411 [2024-07-13 22:05:06.614067] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:21:48.350 22:05:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:21:48.350 [2024-07-13 22:05:07.706460] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:21:48.350 [2024-07-13 22:05:07.706524] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:48.350 [2024-07-13 22:05:07.706763] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000108b0 00:21:48.350 22:05:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:21:48.350 22:05:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:21:48.350 22:05:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:21:48.350 22:05:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:21:48.350 
22:05:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:48.350 22:05:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:48.350 22:05:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:48.350 22:05:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:48.350 22:05:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:48.350 22:05:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:48.350 22:05:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:48.350 22:05:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:48.350 22:05:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:48.350 22:05:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:48.350 22:05:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.350 22:05:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.609 22:05:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:48.609 "name": "raid_bdev1", 00:21:48.609 "uuid": "2ab394d6-df93-4e86-ac18-7215477ac0e7", 00:21:48.609 "strip_size_kb": 0, 00:21:48.609 "state": "online", 00:21:48.609 "raid_level": "raid1", 00:21:48.609 "superblock": true, 00:21:48.609 "num_base_bdevs": 4, 00:21:48.609 "num_base_bdevs_discovered": 3, 00:21:48.609 "num_base_bdevs_operational": 3, 00:21:48.609 "base_bdevs_list": [ 00:21:48.609 { 00:21:48.609 "name": null, 00:21:48.609 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:48.609 "is_configured": false, 00:21:48.609 "data_offset": 2048, 00:21:48.609 "data_size": 63488 00:21:48.609 }, 00:21:48.609 { 00:21:48.609 "name": "BaseBdev2", 00:21:48.609 "uuid": "d96c2369-dd37-5690-9bdb-547c9e604a2d", 00:21:48.609 "is_configured": true, 00:21:48.609 "data_offset": 2048, 00:21:48.609 "data_size": 63488 00:21:48.609 }, 00:21:48.609 { 00:21:48.609 "name": "BaseBdev3", 00:21:48.609 "uuid": "248a3496-08f1-5ef0-be69-44f817dbf927", 00:21:48.609 "is_configured": true, 00:21:48.609 "data_offset": 2048, 00:21:48.609 "data_size": 63488 00:21:48.609 }, 00:21:48.609 { 00:21:48.609 "name": "BaseBdev4", 00:21:48.609 "uuid": "05e513aa-8b21-5bd4-9f22-46670dc57c09", 00:21:48.609 "is_configured": true, 00:21:48.609 "data_offset": 2048, 00:21:48.609 "data_size": 63488 00:21:48.609 } 00:21:48.609 ] 00:21:48.609 }' 00:21:48.609 22:05:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:48.609 22:05:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.177 22:05:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:49.177 [2024-07-13 22:05:08.527879] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:49.177 [2024-07-13 22:05:08.527929] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:49.177 [2024-07-13 22:05:08.530290] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:49.177 [2024-07-13 22:05:08.530332] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:49.178 [2024-07-13 22:05:08.530434] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:49.178 [2024-07-13 22:05:08.530446] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000042c80 name raid_bdev1, state offline 00:21:49.178 0 00:21:49.178 22:05:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1459202 00:21:49.178 22:05:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 1459202 ']' 00:21:49.178 22:05:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 1459202 00:21:49.178 22:05:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:21:49.178 22:05:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:49.178 22:05:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1459202 00:21:49.495 22:05:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:49.495 22:05:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:49.495 22:05:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1459202' 00:21:49.495 killing process with pid 1459202 00:21:49.495 22:05:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 1459202 00:21:49.495 [2024-07-13 22:05:08.598910] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:49.495 22:05:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 1459202 00:21:49.755 [2024-07-13 22:05:08.859966] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:51.159 22:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.kKE5lLm6qF 00:21:51.159 22:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:21:51.159 22:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:21:51.159 22:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 
00:21:51.159 22:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:21:51.159 22:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:51.159 22:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:51.159 22:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:21:51.159 00:21:51.159 real 0m7.450s 00:21:51.159 user 0m10.555s 00:21:51.159 sys 0m1.217s 00:21:51.159 22:05:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:51.159 22:05:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.159 ************************************ 00:21:51.159 END TEST raid_write_error_test 00:21:51.159 ************************************ 00:21:51.159 22:05:10 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:51.159 22:05:10 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' true = true ']' 00:21:51.159 22:05:10 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:21:51.159 22:05:10 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:21:51.159 22:05:10 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:21:51.159 22:05:10 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:51.159 22:05:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:51.159 ************************************ 00:21:51.159 START TEST raid_rebuild_test 00:21:51.159 ************************************ 00:21:51.159 22:05:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 false false true 00:21:51.159 22:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:21:51.159 22:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:21:51.159 22:05:10 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@570 -- # local superblock=false 00:21:51.159 22:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:21:51.159 22:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:21:51.159 22:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:21:51.159 22:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:51.159 22:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:21:51.159 22:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:51.159 22:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:51.159 22:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:21:51.159 22:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:51.159 22:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:51.159 22:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:51.159 22:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:21:51.159 22:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:21:51.159 22:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:21:51.159 22:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:21:51.159 22:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:21:51.159 22:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:21:51.159 22:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:21:51.159 22:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:21:51.160 22:05:10 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:21:51.160 22:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=1460616 00:21:51.160 22:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 1460616 /var/tmp/spdk-raid.sock 00:21:51.160 22:05:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:51.160 22:05:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 1460616 ']' 00:21:51.160 22:05:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:51.160 22:05:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:51.160 22:05:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:51.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:51.160 22:05:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:51.160 22:05:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.160 [2024-07-13 22:05:10.315516] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:21:51.160 [2024-07-13 22:05:10.315628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1460616 ] 00:21:51.160 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:51.160 Zero copy mechanism will not be used. 
00:21:51.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:51.160 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:51.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:51.160 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:51.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:51.160 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:51.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:51.160 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:51.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:51.160 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:51.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:51.160 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:51.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:51.160 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:51.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:51.160 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:51.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:51.160 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:51.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:51.160 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:51.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:51.160 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:51.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:51.160 EAL: Requested device 0000:3d:02.3 cannot be used 00:21:51.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:51.160 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:51.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:51.160 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:51.160 
[2024-07-13 22:05:10.488658] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.418 [2024-07-13 22:05:10.694195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.676 [2024-07-13 22:05:10.938911] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:51.676 [2024-07-13 22:05:10.938938] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:51.934 22:05:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:51.934 22:05:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:21:51.934 22:05:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:51.934 22:05:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:51.934 BaseBdev1_malloc 00:21:51.934 22:05:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:52.192 [2024-07-13 22:05:11.443831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:52.192 [2024-07-13 22:05:11.443888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base
bdev opened 00:21:52.192 [2024-07-13 22:05:11.443937] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:21:52.192 [2024-07-13 22:05:11.443951] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:52.192 [2024-07-13 22:05:11.446054] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:52.192 [2024-07-13 22:05:11.446087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:52.192 BaseBdev1 00:21:52.192 22:05:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:52.192 22:05:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:52.450 BaseBdev2_malloc 00:21:52.450 22:05:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:52.450 [2024-07-13 22:05:11.807560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:52.450 [2024-07-13 22:05:11.807613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:52.450 [2024-07-13 22:05:11.807636] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:21:52.450 [2024-07-13 22:05:11.807652] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:52.450 [2024-07-13 22:05:11.809812] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:52.450 [2024-07-13 22:05:11.809845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:52.450 BaseBdev2 00:21:52.450 22:05:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:52.708 spare_malloc 00:21:52.708 22:05:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:52.966 spare_delay 00:21:52.966 22:05:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:52.966 [2024-07-13 22:05:12.328423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:52.966 [2024-07-13 22:05:12.328473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:52.966 [2024-07-13 22:05:12.328494] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:21:52.966 [2024-07-13 22:05:12.328507] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:52.966 [2024-07-13 22:05:12.330573] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:52.966 [2024-07-13 22:05:12.330604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:52.966 spare 00:21:52.966 22:05:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:21:53.223 [2024-07-13 22:05:12.480833] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:53.223 [2024-07-13 22:05:12.482605] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:53.223 [2024-07-13 22:05:12.482686] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 
0x616000041a80 00:21:53.223 [2024-07-13 22:05:12.482701] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:53.223 [2024-07-13 22:05:12.482989] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:21:53.223 [2024-07-13 22:05:12.483180] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:21:53.223 [2024-07-13 22:05:12.483193] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:21:53.223 [2024-07-13 22:05:12.483361] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:53.223 22:05:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:53.223 22:05:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:53.223 22:05:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:53.223 22:05:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:53.223 22:05:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:53.223 22:05:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:53.223 22:05:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:53.223 22:05:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:53.223 22:05:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:53.223 22:05:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:53.223 22:05:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.223 22:05:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.480 22:05:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:53.480 "name": "raid_bdev1", 00:21:53.480 "uuid": "c790780a-2912-4278-9549-4bb781ff71b3", 00:21:53.480 "strip_size_kb": 0, 00:21:53.480 "state": "online", 00:21:53.480 "raid_level": "raid1", 00:21:53.480 "superblock": false, 00:21:53.480 "num_base_bdevs": 2, 00:21:53.480 "num_base_bdevs_discovered": 2, 00:21:53.480 "num_base_bdevs_operational": 2, 00:21:53.480 "base_bdevs_list": [ 00:21:53.480 { 00:21:53.480 "name": "BaseBdev1", 00:21:53.480 "uuid": "1765ea31-d47a-5107-ae8b-70af9e401820", 00:21:53.480 "is_configured": true, 00:21:53.480 "data_offset": 0, 00:21:53.480 "data_size": 65536 00:21:53.480 }, 00:21:53.480 { 00:21:53.480 "name": "BaseBdev2", 00:21:53.480 "uuid": "c5fe8f3c-51de-5b94-a450-c93d50766c19", 00:21:53.480 "is_configured": true, 00:21:53.480 "data_offset": 0, 00:21:53.480 "data_size": 65536 00:21:53.480 } 00:21:53.480 ] 00:21:53.480 }' 00:21:53.480 22:05:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:53.480 22:05:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.046 22:05:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:54.046 22:05:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:21:54.046 [2024-07-13 22:05:13.331312] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:54.046 22:05:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:21:54.046 22:05:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:21:54.046 22:05:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:54.303 22:05:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:21:54.303 22:05:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:21:54.303 22:05:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:21:54.303 22:05:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:21:54.303 22:05:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:54.303 22:05:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:54.303 22:05:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:54.303 22:05:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:54.303 22:05:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:54.303 22:05:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:54.303 22:05:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:54.303 22:05:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:54.303 22:05:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:54.303 22:05:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:54.303 [2024-07-13 22:05:13.688065] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:21:54.562 /dev/nbd0 00:21:54.562 22:05:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:54.562 22:05:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:21:54.562 22:05:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:54.562 22:05:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:21:54.562 22:05:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:54.562 22:05:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:54.562 22:05:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:54.562 22:05:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:21:54.562 22:05:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:54.562 22:05:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:54.562 22:05:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:54.562 1+0 records in 00:21:54.562 1+0 records out 00:21:54.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236559 s, 17.3 MB/s 00:21:54.562 22:05:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:54.562 22:05:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:21:54.562 22:05:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:54.562 22:05:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:54.562 22:05:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:21:54.562 22:05:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:54.562 22:05:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:54.562 22:05:13 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:21:54.562 22:05:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:21:54.562 22:05:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:21:57.850 65536+0 records in 00:21:57.850 65536+0 records out 00:21:57.850 33554432 bytes (34 MB, 32 MiB) copied, 3.4549 s, 9.7 MB/s 00:21:57.850 22:05:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:57.850 22:05:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:57.850 22:05:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:57.850 22:05:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:57.850 22:05:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:57.850 22:05:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:57.850 22:05:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:58.109 22:05:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:58.109 [2024-07-13 22:05:17.389127] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:58.109 22:05:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:58.109 22:05:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:58.109 22:05:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:58.109 22:05:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:58.109 22:05:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:21:58.109 22:05:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:58.109 22:05:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:58.109 22:05:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:58.368 [2024-07-13 22:05:17.545622] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:58.368 22:05:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:58.368 22:05:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:58.368 22:05:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:58.368 22:05:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:58.368 22:05:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:58.368 22:05:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:58.368 22:05:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:58.368 22:05:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:58.368 22:05:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:58.368 22:05:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:58.368 22:05:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.368 22:05:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.368 22:05:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:21:58.368 "name": "raid_bdev1", 00:21:58.368 "uuid": "c790780a-2912-4278-9549-4bb781ff71b3", 00:21:58.368 "strip_size_kb": 0, 00:21:58.368 "state": "online", 00:21:58.368 "raid_level": "raid1", 00:21:58.368 "superblock": false, 00:21:58.368 "num_base_bdevs": 2, 00:21:58.368 "num_base_bdevs_discovered": 1, 00:21:58.368 "num_base_bdevs_operational": 1, 00:21:58.368 "base_bdevs_list": [ 00:21:58.368 { 00:21:58.368 "name": null, 00:21:58.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.368 "is_configured": false, 00:21:58.368 "data_offset": 0, 00:21:58.369 "data_size": 65536 00:21:58.369 }, 00:21:58.369 { 00:21:58.369 "name": "BaseBdev2", 00:21:58.369 "uuid": "c5fe8f3c-51de-5b94-a450-c93d50766c19", 00:21:58.369 "is_configured": true, 00:21:58.369 "data_offset": 0, 00:21:58.369 "data_size": 65536 00:21:58.369 } 00:21:58.369 ] 00:21:58.369 }' 00:21:58.369 22:05:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:58.369 22:05:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.937 22:05:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:59.196 [2024-07-13 22:05:18.411922] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:59.196 [2024-07-13 22:05:18.432035] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d14400 00:21:59.196 [2024-07-13 22:05:18.433799] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:59.196 22:05:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:22:00.131 22:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:00.131 22:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:00.131 
22:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:00.131 22:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:00.131 22:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:00.131 22:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.131 22:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.389 22:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:00.389 "name": "raid_bdev1", 00:22:00.389 "uuid": "c790780a-2912-4278-9549-4bb781ff71b3", 00:22:00.389 "strip_size_kb": 0, 00:22:00.389 "state": "online", 00:22:00.389 "raid_level": "raid1", 00:22:00.389 "superblock": false, 00:22:00.389 "num_base_bdevs": 2, 00:22:00.389 "num_base_bdevs_discovered": 2, 00:22:00.389 "num_base_bdevs_operational": 2, 00:22:00.389 "process": { 00:22:00.390 "type": "rebuild", 00:22:00.390 "target": "spare", 00:22:00.390 "progress": { 00:22:00.390 "blocks": 22528, 00:22:00.390 "percent": 34 00:22:00.390 } 00:22:00.390 }, 00:22:00.390 "base_bdevs_list": [ 00:22:00.390 { 00:22:00.390 "name": "spare", 00:22:00.390 "uuid": "81210682-3c80-565c-abcd-084d8f9c1bea", 00:22:00.390 "is_configured": true, 00:22:00.390 "data_offset": 0, 00:22:00.390 "data_size": 65536 00:22:00.390 }, 00:22:00.390 { 00:22:00.390 "name": "BaseBdev2", 00:22:00.390 "uuid": "c5fe8f3c-51de-5b94-a450-c93d50766c19", 00:22:00.390 "is_configured": true, 00:22:00.390 "data_offset": 0, 00:22:00.390 "data_size": 65536 00:22:00.390 } 00:22:00.390 ] 00:22:00.390 }' 00:22:00.390 22:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:00.390 22:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:22:00.390 22:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:00.390 22:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:00.390 22:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:00.648 [2024-07-13 22:05:19.878818] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:00.648 [2024-07-13 22:05:19.945302] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:00.648 [2024-07-13 22:05:19.945350] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:00.648 [2024-07-13 22:05:19.945381] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:00.648 [2024-07-13 22:05:19.945392] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:00.648 22:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:00.648 22:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:00.648 22:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:00.648 22:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:00.648 22:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:00.649 22:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:00.649 22:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:00.649 22:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:00.649 22:05:19 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:00.649 22:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:00.649 22:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.649 22:05:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.907 22:05:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:00.907 "name": "raid_bdev1", 00:22:00.908 "uuid": "c790780a-2912-4278-9549-4bb781ff71b3", 00:22:00.908 "strip_size_kb": 0, 00:22:00.908 "state": "online", 00:22:00.908 "raid_level": "raid1", 00:22:00.908 "superblock": false, 00:22:00.908 "num_base_bdevs": 2, 00:22:00.908 "num_base_bdevs_discovered": 1, 00:22:00.908 "num_base_bdevs_operational": 1, 00:22:00.908 "base_bdevs_list": [ 00:22:00.908 { 00:22:00.908 "name": null, 00:22:00.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.908 "is_configured": false, 00:22:00.908 "data_offset": 0, 00:22:00.908 "data_size": 65536 00:22:00.908 }, 00:22:00.908 { 00:22:00.908 "name": "BaseBdev2", 00:22:00.908 "uuid": "c5fe8f3c-51de-5b94-a450-c93d50766c19", 00:22:00.908 "is_configured": true, 00:22:00.908 "data_offset": 0, 00:22:00.908 "data_size": 65536 00:22:00.908 } 00:22:00.908 ] 00:22:00.908 }' 00:22:00.908 22:05:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:00.908 22:05:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.474 22:05:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:01.474 22:05:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:01.474 22:05:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:01.474 22:05:20 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:01.474 22:05:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:01.474 22:05:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.474 22:05:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.475 22:05:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:01.475 "name": "raid_bdev1", 00:22:01.475 "uuid": "c790780a-2912-4278-9549-4bb781ff71b3", 00:22:01.475 "strip_size_kb": 0, 00:22:01.475 "state": "online", 00:22:01.475 "raid_level": "raid1", 00:22:01.475 "superblock": false, 00:22:01.475 "num_base_bdevs": 2, 00:22:01.475 "num_base_bdevs_discovered": 1, 00:22:01.475 "num_base_bdevs_operational": 1, 00:22:01.475 "base_bdevs_list": [ 00:22:01.475 { 00:22:01.475 "name": null, 00:22:01.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.475 "is_configured": false, 00:22:01.475 "data_offset": 0, 00:22:01.475 "data_size": 65536 00:22:01.475 }, 00:22:01.475 { 00:22:01.475 "name": "BaseBdev2", 00:22:01.475 "uuid": "c5fe8f3c-51de-5b94-a450-c93d50766c19", 00:22:01.475 "is_configured": true, 00:22:01.475 "data_offset": 0, 00:22:01.475 "data_size": 65536 00:22:01.475 } 00:22:01.475 ] 00:22:01.475 }' 00:22:01.475 22:05:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:01.475 22:05:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:01.475 22:05:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:01.475 22:05:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:01.475 22:05:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:01.733 [2024-07-13 22:05:21.010939] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:01.733 [2024-07-13 22:05:21.028181] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d144d0 00:22:01.733 [2024-07-13 22:05:21.029974] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:01.733 22:05:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:02.669 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:02.669 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:02.669 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:02.669 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:02.669 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:02.669 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.669 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.928 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:02.928 "name": "raid_bdev1", 00:22:02.928 "uuid": "c790780a-2912-4278-9549-4bb781ff71b3", 00:22:02.928 "strip_size_kb": 0, 00:22:02.928 "state": "online", 00:22:02.928 "raid_level": "raid1", 00:22:02.928 "superblock": false, 00:22:02.928 "num_base_bdevs": 2, 00:22:02.928 "num_base_bdevs_discovered": 2, 00:22:02.928 "num_base_bdevs_operational": 2, 00:22:02.928 "process": { 00:22:02.928 "type": "rebuild", 00:22:02.928 
"target": "spare", 00:22:02.928 "progress": { 00:22:02.928 "blocks": 22528, 00:22:02.928 "percent": 34 00:22:02.928 } 00:22:02.928 }, 00:22:02.928 "base_bdevs_list": [ 00:22:02.928 { 00:22:02.928 "name": "spare", 00:22:02.928 "uuid": "81210682-3c80-565c-abcd-084d8f9c1bea", 00:22:02.928 "is_configured": true, 00:22:02.928 "data_offset": 0, 00:22:02.928 "data_size": 65536 00:22:02.928 }, 00:22:02.928 { 00:22:02.928 "name": "BaseBdev2", 00:22:02.928 "uuid": "c5fe8f3c-51de-5b94-a450-c93d50766c19", 00:22:02.928 "is_configured": true, 00:22:02.928 "data_offset": 0, 00:22:02.928 "data_size": 65536 00:22:02.928 } 00:22:02.928 ] 00:22:02.928 }' 00:22:02.928 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:02.928 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:02.928 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:02.928 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:02.928 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:22:02.928 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:22:02.928 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:22:02.928 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:22:02.928 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=653 00:22:02.928 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:02.928 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:02.928 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:02.928 22:05:22 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:02.928 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:02.928 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:02.929 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.929 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.187 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:03.187 "name": "raid_bdev1", 00:22:03.187 "uuid": "c790780a-2912-4278-9549-4bb781ff71b3", 00:22:03.187 "strip_size_kb": 0, 00:22:03.187 "state": "online", 00:22:03.187 "raid_level": "raid1", 00:22:03.187 "superblock": false, 00:22:03.187 "num_base_bdevs": 2, 00:22:03.187 "num_base_bdevs_discovered": 2, 00:22:03.187 "num_base_bdevs_operational": 2, 00:22:03.187 "process": { 00:22:03.187 "type": "rebuild", 00:22:03.187 "target": "spare", 00:22:03.187 "progress": { 00:22:03.187 "blocks": 28672, 00:22:03.187 "percent": 43 00:22:03.187 } 00:22:03.187 }, 00:22:03.187 "base_bdevs_list": [ 00:22:03.187 { 00:22:03.187 "name": "spare", 00:22:03.187 "uuid": "81210682-3c80-565c-abcd-084d8f9c1bea", 00:22:03.187 "is_configured": true, 00:22:03.187 "data_offset": 0, 00:22:03.187 "data_size": 65536 00:22:03.187 }, 00:22:03.187 { 00:22:03.187 "name": "BaseBdev2", 00:22:03.187 "uuid": "c5fe8f3c-51de-5b94-a450-c93d50766c19", 00:22:03.187 "is_configured": true, 00:22:03.187 "data_offset": 0, 00:22:03.187 "data_size": 65536 00:22:03.187 } 00:22:03.187 ] 00:22:03.187 }' 00:22:03.187 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:03.187 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:03.187 22:05:22 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:03.187 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:03.187 22:05:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:04.563 22:05:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:04.563 22:05:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:04.563 22:05:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:04.563 22:05:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:04.563 22:05:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:04.563 22:05:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:04.563 22:05:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.563 22:05:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:04.563 22:05:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:04.563 "name": "raid_bdev1", 00:22:04.563 "uuid": "c790780a-2912-4278-9549-4bb781ff71b3", 00:22:04.563 "strip_size_kb": 0, 00:22:04.563 "state": "online", 00:22:04.563 "raid_level": "raid1", 00:22:04.563 "superblock": false, 00:22:04.563 "num_base_bdevs": 2, 00:22:04.563 "num_base_bdevs_discovered": 2, 00:22:04.563 "num_base_bdevs_operational": 2, 00:22:04.563 "process": { 00:22:04.563 "type": "rebuild", 00:22:04.563 "target": "spare", 00:22:04.563 "progress": { 00:22:04.563 "blocks": 53248, 00:22:04.563 "percent": 81 00:22:04.563 } 00:22:04.563 }, 00:22:04.563 "base_bdevs_list": [ 00:22:04.563 { 00:22:04.563 "name": "spare", 
00:22:04.563 "uuid": "81210682-3c80-565c-abcd-084d8f9c1bea", 00:22:04.563 "is_configured": true, 00:22:04.563 "data_offset": 0, 00:22:04.563 "data_size": 65536 00:22:04.563 }, 00:22:04.563 { 00:22:04.563 "name": "BaseBdev2", 00:22:04.563 "uuid": "c5fe8f3c-51de-5b94-a450-c93d50766c19", 00:22:04.563 "is_configured": true, 00:22:04.563 "data_offset": 0, 00:22:04.563 "data_size": 65536 00:22:04.563 } 00:22:04.563 ] 00:22:04.563 }' 00:22:04.563 22:05:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:04.563 22:05:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:04.563 22:05:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:04.563 22:05:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:04.563 22:05:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:05.159 [2024-07-13 22:05:24.254314] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:05.159 [2024-07-13 22:05:24.254373] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:05.159 [2024-07-13 22:05:24.254414] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:05.727 22:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:05.727 22:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:05.727 22:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:05.727 22:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:05.727 22:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:05.727 22:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 
00:22:05.727 22:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.727 22:05:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.727 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:05.727 "name": "raid_bdev1", 00:22:05.727 "uuid": "c790780a-2912-4278-9549-4bb781ff71b3", 00:22:05.727 "strip_size_kb": 0, 00:22:05.727 "state": "online", 00:22:05.727 "raid_level": "raid1", 00:22:05.727 "superblock": false, 00:22:05.727 "num_base_bdevs": 2, 00:22:05.727 "num_base_bdevs_discovered": 2, 00:22:05.727 "num_base_bdevs_operational": 2, 00:22:05.727 "base_bdevs_list": [ 00:22:05.727 { 00:22:05.727 "name": "spare", 00:22:05.727 "uuid": "81210682-3c80-565c-abcd-084d8f9c1bea", 00:22:05.727 "is_configured": true, 00:22:05.727 "data_offset": 0, 00:22:05.727 "data_size": 65536 00:22:05.727 }, 00:22:05.727 { 00:22:05.727 "name": "BaseBdev2", 00:22:05.727 "uuid": "c5fe8f3c-51de-5b94-a450-c93d50766c19", 00:22:05.727 "is_configured": true, 00:22:05.727 "data_offset": 0, 00:22:05.727 "data_size": 65536 00:22:05.727 } 00:22:05.727 ] 00:22:05.727 }' 00:22:05.727 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:05.727 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:05.727 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:05.727 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:22:05.727 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:22:05.727 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:05.727 22:05:25 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:05.727 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:05.727 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:05.727 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:05.727 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.727 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.985 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:05.985 "name": "raid_bdev1", 00:22:05.985 "uuid": "c790780a-2912-4278-9549-4bb781ff71b3", 00:22:05.985 "strip_size_kb": 0, 00:22:05.985 "state": "online", 00:22:05.985 "raid_level": "raid1", 00:22:05.985 "superblock": false, 00:22:05.985 "num_base_bdevs": 2, 00:22:05.985 "num_base_bdevs_discovered": 2, 00:22:05.985 "num_base_bdevs_operational": 2, 00:22:05.985 "base_bdevs_list": [ 00:22:05.985 { 00:22:05.985 "name": "spare", 00:22:05.985 "uuid": "81210682-3c80-565c-abcd-084d8f9c1bea", 00:22:05.985 "is_configured": true, 00:22:05.985 "data_offset": 0, 00:22:05.985 "data_size": 65536 00:22:05.985 }, 00:22:05.985 { 00:22:05.985 "name": "BaseBdev2", 00:22:05.985 "uuid": "c5fe8f3c-51de-5b94-a450-c93d50766c19", 00:22:05.985 "is_configured": true, 00:22:05.985 "data_offset": 0, 00:22:05.985 "data_size": 65536 00:22:05.985 } 00:22:05.985 ] 00:22:05.985 }' 00:22:05.985 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:05.985 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:05.985 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:05.985 22:05:25 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:05.985 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:05.985 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:05.985 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:05.985 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:05.985 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:05.985 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:05.985 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:05.985 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:05.985 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:05.985 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:05.985 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.985 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.244 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:06.244 "name": "raid_bdev1", 00:22:06.244 "uuid": "c790780a-2912-4278-9549-4bb781ff71b3", 00:22:06.244 "strip_size_kb": 0, 00:22:06.244 "state": "online", 00:22:06.244 "raid_level": "raid1", 00:22:06.244 "superblock": false, 00:22:06.244 "num_base_bdevs": 2, 00:22:06.244 "num_base_bdevs_discovered": 2, 00:22:06.244 "num_base_bdevs_operational": 2, 00:22:06.244 "base_bdevs_list": [ 00:22:06.244 { 00:22:06.244 "name": 
"spare", 00:22:06.244 "uuid": "81210682-3c80-565c-abcd-084d8f9c1bea", 00:22:06.244 "is_configured": true, 00:22:06.244 "data_offset": 0, 00:22:06.244 "data_size": 65536 00:22:06.244 }, 00:22:06.244 { 00:22:06.244 "name": "BaseBdev2", 00:22:06.244 "uuid": "c5fe8f3c-51de-5b94-a450-c93d50766c19", 00:22:06.244 "is_configured": true, 00:22:06.244 "data_offset": 0, 00:22:06.244 "data_size": 65536 00:22:06.244 } 00:22:06.244 ] 00:22:06.244 }' 00:22:06.244 22:05:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:06.244 22:05:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:06.811 22:05:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:06.811 [2024-07-13 22:05:26.157355] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:06.811 [2024-07-13 22:05:26.157387] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:06.811 [2024-07-13 22:05:26.157457] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:06.811 [2024-07-13 22:05:26.157531] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:06.811 [2024-07-13 22:05:26.157542] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:22:06.811 22:05:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.811 22:05:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:22:07.069 22:05:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:22:07.069 22:05:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:22:07.069 22:05:26 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:22:07.069 22:05:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:07.069 22:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:07.069 22:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:07.069 22:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:07.069 22:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:07.069 22:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:07.069 22:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:22:07.069 22:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:07.069 22:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:07.069 22:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:07.328 /dev/nbd0 00:22:07.328 22:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:07.328 22:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:07.328 22:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:07.328 22:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:22:07.328 22:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:07.328 22:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:07.328 22:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 
00:22:07.328 22:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:22:07.328 22:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:07.328 22:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:07.328 22:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:07.328 1+0 records in 00:22:07.328 1+0 records out 00:22:07.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253264 s, 16.2 MB/s 00:22:07.328 22:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:07.328 22:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:22:07.328 22:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:07.328 22:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:07.328 22:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:22:07.328 22:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:07.328 22:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:07.328 22:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:07.587 /dev/nbd1 00:22:07.587 22:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:07.587 22:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:07.587 22:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:22:07.587 22:05:26 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:22:07.587 22:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:07.587 22:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:07.587 22:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:22:07.587 22:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:22:07.587 22:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:07.587 22:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:07.587 22:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:07.587 1+0 records in 00:22:07.587 1+0 records out 00:22:07.587 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288593 s, 14.2 MB/s 00:22:07.587 22:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:07.587 22:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:22:07.587 22:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:07.587 22:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:07.587 22:05:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:22:07.587 22:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:07.587 22:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:07.587 22:05:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:07.587 22:05:26 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:07.587 22:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:07.587 22:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:07.587 22:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:07.587 22:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:22:07.587 22:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:07.587 22:05:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:07.846 22:05:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:07.846 22:05:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:07.846 22:05:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:07.846 22:05:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:07.846 22:05:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:07.846 22:05:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:07.846 22:05:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:07.846 22:05:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:07.846 22:05:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:07.846 22:05:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:08.105 22:05:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:08.105 
22:05:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:08.105 22:05:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:08.105 22:05:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:08.105 22:05:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:08.105 22:05:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:08.105 22:05:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:08.105 22:05:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:08.105 22:05:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:22:08.105 22:05:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 1460616 00:22:08.105 22:05:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 1460616 ']' 00:22:08.105 22:05:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 1460616 00:22:08.105 22:05:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:22:08.105 22:05:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:08.105 22:05:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1460616 00:22:08.105 22:05:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:08.105 22:05:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:08.105 22:05:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1460616' 00:22:08.105 killing process with pid 1460616 00:22:08.105 22:05:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # kill 1460616 00:22:08.105 Received shutdown signal, test time was about 60.000000 seconds 00:22:08.105 
00:22:08.105 Latency(us) 00:22:08.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.105 =================================================================================================================== 00:22:08.105 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:08.105 [2024-07-13 22:05:27.386513] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:08.105 22:05:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # wait 1460616 00:22:08.364 [2024-07-13 22:05:27.619299] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:22:09.739 00:22:09.739 real 0m18.598s 00:22:09.739 user 0m24.227s 00:22:09.739 sys 0m3.719s 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:09.739 ************************************ 00:22:09.739 END TEST raid_rebuild_test 00:22:09.739 ************************************ 00:22:09.739 22:05:28 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:09.739 22:05:28 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:22:09.739 22:05:28 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:22:09.739 22:05:28 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:09.739 22:05:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:09.739 ************************************ 00:22:09.739 START TEST raid_rebuild_test_sb 00:22:09.739 ************************************ 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 
00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:22:09.739 22:05:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=1463865 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 1463865 /var/tmp/spdk-raid.sock 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1463865 ']' 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:09.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:09.739 22:05:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.739 [2024-07-13 22:05:29.005303] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:22:09.739 [2024-07-13 22:05:29.005404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1463865 ] 00:22:09.740 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:09.740 Zero copy mechanism will not be used. 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:09.740 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:09.740 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:09.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:09.740 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:09.998 [2024-07-13 22:05:29.171266] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.998 [2024-07-13 22:05:29.376635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.257 [2024-07-13 22:05:29.613352] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:10.257 [2024-07-13 22:05:29.613391] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:10.515 22:05:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:10.515 22:05:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:22:10.515 22:05:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:22:10.515 22:05:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:10.774 BaseBdev1_malloc 00:22:10.774 
22:05:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:10.774 [2024-07-13 22:05:30.108763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:10.774 [2024-07-13 22:05:30.108824] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:10.774 [2024-07-13 22:05:30.108866] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:22:10.774 [2024-07-13 22:05:30.108881] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:10.774 [2024-07-13 22:05:30.111042] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:10.774 [2024-07-13 22:05:30.111076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:10.774 BaseBdev1 00:22:10.774 22:05:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:22:10.774 22:05:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:11.033 BaseBdev2_malloc 00:22:11.033 22:05:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:11.292 [2024-07-13 22:05:30.500171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:11.292 [2024-07-13 22:05:30.500222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:11.292 [2024-07-13 22:05:30.500258] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:22:11.292 [2024-07-13 22:05:30.500274] 
vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:11.292 [2024-07-13 22:05:30.502293] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:11.292 [2024-07-13 22:05:30.502323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:11.292 BaseBdev2 00:22:11.292 22:05:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:11.551 spare_malloc 00:22:11.551 22:05:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:11.551 spare_delay 00:22:11.551 22:05:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:11.810 [2024-07-13 22:05:31.046999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:11.810 [2024-07-13 22:05:31.047050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:11.810 [2024-07-13 22:05:31.047084] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:22:11.810 [2024-07-13 22:05:31.047098] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:11.810 [2024-07-13 22:05:31.049163] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:11.810 [2024-07-13 22:05:31.049193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:11.810 spare 00:22:11.810 22:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:22:12.069 [2024-07-13 22:05:31.211467] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:12.069 [2024-07-13 22:05:31.213210] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:12.069 [2024-07-13 22:05:31.213378] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:22:12.069 [2024-07-13 22:05:31.213396] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:12.069 [2024-07-13 22:05:31.213643] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:22:12.069 [2024-07-13 22:05:31.213838] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:22:12.069 [2024-07-13 22:05:31.213850] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:22:12.069 [2024-07-13 22:05:31.214008] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:12.069 22:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:12.069 22:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:12.069 22:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:12.069 22:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:12.069 22:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:12.069 22:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:12.069 22:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:12.069 22:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:22:12.069 22:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:12.069 22:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:12.069 22:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.069 22:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.069 22:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:12.069 "name": "raid_bdev1", 00:22:12.069 "uuid": "41074c1d-90f3-4c1f-9c84-00ce893dff97", 00:22:12.069 "strip_size_kb": 0, 00:22:12.069 "state": "online", 00:22:12.069 "raid_level": "raid1", 00:22:12.069 "superblock": true, 00:22:12.069 "num_base_bdevs": 2, 00:22:12.069 "num_base_bdevs_discovered": 2, 00:22:12.069 "num_base_bdevs_operational": 2, 00:22:12.069 "base_bdevs_list": [ 00:22:12.069 { 00:22:12.069 "name": "BaseBdev1", 00:22:12.069 "uuid": "ee15a0b1-319f-5020-bfad-b2266981feb5", 00:22:12.069 "is_configured": true, 00:22:12.069 "data_offset": 2048, 00:22:12.069 "data_size": 63488 00:22:12.069 }, 00:22:12.069 { 00:22:12.069 "name": "BaseBdev2", 00:22:12.069 "uuid": "c87110c2-8835-5102-8920-757ec845bdbf", 00:22:12.070 "is_configured": true, 00:22:12.070 "data_offset": 2048, 00:22:12.070 "data_size": 63488 00:22:12.070 } 00:22:12.070 ] 00:22:12.070 }' 00:22:12.070 22:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:12.070 22:05:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.637 22:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:12.637 22:05:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r 
'.[].num_blocks' 00:22:12.896 [2024-07-13 22:05:32.062010] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:12.896 22:05:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:22:12.896 22:05:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.896 22:05:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:12.896 22:05:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:22:12.896 22:05:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:22:12.896 22:05:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:22:12.896 22:05:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:22:12.896 22:05:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:12.896 22:05:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:12.896 22:05:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:12.896 22:05:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:12.896 22:05:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:12.896 22:05:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:12.896 22:05:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:22:12.896 22:05:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:12.896 22:05:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:12.896 22:05:32 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:13.155 [2024-07-13 22:05:32.418704] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:22:13.155 /dev/nbd0 00:22:13.155 22:05:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:13.155 22:05:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:13.155 22:05:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:13.155 22:05:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:22:13.155 22:05:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:13.155 22:05:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:13.155 22:05:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:13.155 22:05:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:22:13.155 22:05:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:13.155 22:05:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:13.155 22:05:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:13.155 1+0 records in 00:22:13.155 1+0 records out 00:22:13.155 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271147 s, 15.1 MB/s 00:22:13.155 22:05:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:13.155 22:05:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:22:13.155 22:05:32 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:13.155 22:05:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:13.155 22:05:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:22:13.155 22:05:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:13.155 22:05:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:13.155 22:05:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:22:13.155 22:05:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:22:13.155 22:05:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:22:17.356 63488+0 records in 00:22:17.356 63488+0 records out 00:22:17.356 32505856 bytes (33 MB, 31 MiB) copied, 4.20027 s, 7.7 MB/s 00:22:17.356 22:05:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:17.356 22:05:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:17.356 22:05:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:17.356 22:05:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:17.356 22:05:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:22:17.356 22:05:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:17.356 22:05:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:17.615 22:05:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 
00:22:17.615 22:05:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:17.615 22:05:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:17.615 22:05:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:17.615 22:05:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:17.615 22:05:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:17.615 [2024-07-13 22:05:36.885047] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:17.615 22:05:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:17.615 22:05:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:17.615 22:05:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:17.875 [2024-07-13 22:05:37.029979] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:17.875 22:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:17.875 22:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:17.875 22:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:17.875 22:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:17.875 22:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:17.875 22:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:17.875 22:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:17.875 22:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:22:17.875 22:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:17.875 22:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:17.875 22:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.875 22:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.875 22:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:17.875 "name": "raid_bdev1", 00:22:17.875 "uuid": "41074c1d-90f3-4c1f-9c84-00ce893dff97", 00:22:17.875 "strip_size_kb": 0, 00:22:17.875 "state": "online", 00:22:17.875 "raid_level": "raid1", 00:22:17.875 "superblock": true, 00:22:17.875 "num_base_bdevs": 2, 00:22:17.875 "num_base_bdevs_discovered": 1, 00:22:17.875 "num_base_bdevs_operational": 1, 00:22:17.875 "base_bdevs_list": [ 00:22:17.875 { 00:22:17.875 "name": null, 00:22:17.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.875 "is_configured": false, 00:22:17.875 "data_offset": 2048, 00:22:17.875 "data_size": 63488 00:22:17.875 }, 00:22:17.875 { 00:22:17.875 "name": "BaseBdev2", 00:22:17.875 "uuid": "c87110c2-8835-5102-8920-757ec845bdbf", 00:22:17.875 "is_configured": true, 00:22:17.875 "data_offset": 2048, 00:22:17.875 "data_size": 63488 00:22:17.875 } 00:22:17.875 ] 00:22:17.875 }' 00:22:17.875 22:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:17.875 22:05:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.443 22:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:18.702 [2024-07-13 22:05:37.836072] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:18.702 [2024-07-13 22:05:37.854846] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caaba0 00:22:18.702 [2024-07-13 22:05:37.856608] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:18.702 22:05:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:22:19.641 22:05:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:19.641 22:05:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:19.641 22:05:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:19.641 22:05:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:19.641 22:05:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:19.641 22:05:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.641 22:05:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.900 22:05:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:19.900 "name": "raid_bdev1", 00:22:19.900 "uuid": "41074c1d-90f3-4c1f-9c84-00ce893dff97", 00:22:19.900 "strip_size_kb": 0, 00:22:19.900 "state": "online", 00:22:19.900 "raid_level": "raid1", 00:22:19.900 "superblock": true, 00:22:19.900 "num_base_bdevs": 2, 00:22:19.900 "num_base_bdevs_discovered": 2, 00:22:19.900 "num_base_bdevs_operational": 2, 00:22:19.900 "process": { 00:22:19.900 "type": "rebuild", 00:22:19.900 "target": "spare", 00:22:19.900 "progress": { 00:22:19.900 "blocks": 22528, 00:22:19.900 "percent": 35 00:22:19.900 } 00:22:19.900 }, 00:22:19.900 
"base_bdevs_list": [ 00:22:19.900 { 00:22:19.900 "name": "spare", 00:22:19.900 "uuid": "6327b36c-0654-5a2f-8c77-05e2ae106df8", 00:22:19.900 "is_configured": true, 00:22:19.900 "data_offset": 2048, 00:22:19.900 "data_size": 63488 00:22:19.900 }, 00:22:19.900 { 00:22:19.900 "name": "BaseBdev2", 00:22:19.900 "uuid": "c87110c2-8835-5102-8920-757ec845bdbf", 00:22:19.900 "is_configured": true, 00:22:19.900 "data_offset": 2048, 00:22:19.900 "data_size": 63488 00:22:19.900 } 00:22:19.900 ] 00:22:19.900 }' 00:22:19.900 22:05:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:19.900 22:05:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:19.900 22:05:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:19.900 22:05:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:19.900 22:05:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:20.196 [2024-07-13 22:05:39.294156] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:20.196 [2024-07-13 22:05:39.368156] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:20.196 [2024-07-13 22:05:39.368209] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:20.196 [2024-07-13 22:05:39.368225] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:20.196 [2024-07-13 22:05:39.368236] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:20.196 22:05:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:20.196 22:05:39 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:20.196 22:05:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:20.196 22:05:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:20.196 22:05:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:20.196 22:05:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:20.196 22:05:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:20.196 22:05:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:20.196 22:05:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:20.196 22:05:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:20.196 22:05:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.196 22:05:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.457 22:05:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:20.457 "name": "raid_bdev1", 00:22:20.457 "uuid": "41074c1d-90f3-4c1f-9c84-00ce893dff97", 00:22:20.457 "strip_size_kb": 0, 00:22:20.457 "state": "online", 00:22:20.457 "raid_level": "raid1", 00:22:20.457 "superblock": true, 00:22:20.457 "num_base_bdevs": 2, 00:22:20.457 "num_base_bdevs_discovered": 1, 00:22:20.457 "num_base_bdevs_operational": 1, 00:22:20.457 "base_bdevs_list": [ 00:22:20.457 { 00:22:20.457 "name": null, 00:22:20.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.457 "is_configured": false, 00:22:20.457 "data_offset": 2048, 00:22:20.457 "data_size": 63488 00:22:20.457 }, 00:22:20.457 { 00:22:20.457 "name": "BaseBdev2", 
00:22:20.457 "uuid": "c87110c2-8835-5102-8920-757ec845bdbf", 00:22:20.457 "is_configured": true, 00:22:20.457 "data_offset": 2048, 00:22:20.457 "data_size": 63488 00:22:20.457 } 00:22:20.457 ] 00:22:20.457 }' 00:22:20.457 22:05:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:20.457 22:05:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.716 22:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:20.716 22:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:20.716 22:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:20.716 22:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:20.716 22:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:20.975 22:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.975 22:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.975 22:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:20.975 "name": "raid_bdev1", 00:22:20.975 "uuid": "41074c1d-90f3-4c1f-9c84-00ce893dff97", 00:22:20.975 "strip_size_kb": 0, 00:22:20.975 "state": "online", 00:22:20.975 "raid_level": "raid1", 00:22:20.975 "superblock": true, 00:22:20.975 "num_base_bdevs": 2, 00:22:20.975 "num_base_bdevs_discovered": 1, 00:22:20.975 "num_base_bdevs_operational": 1, 00:22:20.975 "base_bdevs_list": [ 00:22:20.975 { 00:22:20.975 "name": null, 00:22:20.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.975 "is_configured": false, 00:22:20.975 "data_offset": 2048, 00:22:20.975 "data_size": 63488 00:22:20.975 }, 
00:22:20.975 { 00:22:20.975 "name": "BaseBdev2", 00:22:20.975 "uuid": "c87110c2-8835-5102-8920-757ec845bdbf", 00:22:20.975 "is_configured": true, 00:22:20.975 "data_offset": 2048, 00:22:20.975 "data_size": 63488 00:22:20.975 } 00:22:20.975 ] 00:22:20.975 }' 00:22:20.975 22:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:20.975 22:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:20.975 22:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:20.975 22:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:20.975 22:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:21.234 [2024-07-13 22:05:40.518389] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:21.234 [2024-07-13 22:05:40.535313] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caac70 00:22:21.234 [2024-07-13 22:05:40.537078] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:21.234 22:05:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:22.169 22:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:22.169 22:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:22.169 22:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:22.169 22:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:22.169 22:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:22.169 22:05:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.169 22:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.427 22:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:22.427 "name": "raid_bdev1", 00:22:22.427 "uuid": "41074c1d-90f3-4c1f-9c84-00ce893dff97", 00:22:22.427 "strip_size_kb": 0, 00:22:22.427 "state": "online", 00:22:22.427 "raid_level": "raid1", 00:22:22.427 "superblock": true, 00:22:22.427 "num_base_bdevs": 2, 00:22:22.427 "num_base_bdevs_discovered": 2, 00:22:22.427 "num_base_bdevs_operational": 2, 00:22:22.427 "process": { 00:22:22.427 "type": "rebuild", 00:22:22.427 "target": "spare", 00:22:22.427 "progress": { 00:22:22.427 "blocks": 22528, 00:22:22.427 "percent": 35 00:22:22.427 } 00:22:22.427 }, 00:22:22.427 "base_bdevs_list": [ 00:22:22.427 { 00:22:22.427 "name": "spare", 00:22:22.427 "uuid": "6327b36c-0654-5a2f-8c77-05e2ae106df8", 00:22:22.427 "is_configured": true, 00:22:22.427 "data_offset": 2048, 00:22:22.427 "data_size": 63488 00:22:22.427 }, 00:22:22.427 { 00:22:22.427 "name": "BaseBdev2", 00:22:22.427 "uuid": "c87110c2-8835-5102-8920-757ec845bdbf", 00:22:22.427 "is_configured": true, 00:22:22.427 "data_offset": 2048, 00:22:22.427 "data_size": 63488 00:22:22.427 } 00:22:22.427 ] 00:22:22.427 }' 00:22:22.427 22:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:22.427 22:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:22.427 22:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:22.427 22:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:22.427 22:05:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:22:22.427 22:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:22:22.427 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:22:22.427 22:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:22:22.427 22:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:22:22.427 22:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:22:22.427 22:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=672 00:22:22.427 22:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:22.427 22:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:22.427 22:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:22.427 22:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:22.686 22:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:22.686 22:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:22.686 22:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.686 22:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.686 22:05:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:22.686 "name": "raid_bdev1", 00:22:22.686 "uuid": "41074c1d-90f3-4c1f-9c84-00ce893dff97", 00:22:22.686 "strip_size_kb": 0, 00:22:22.686 "state": "online", 00:22:22.686 "raid_level": "raid1", 00:22:22.686 
"superblock": true, 00:22:22.686 "num_base_bdevs": 2, 00:22:22.686 "num_base_bdevs_discovered": 2, 00:22:22.686 "num_base_bdevs_operational": 2, 00:22:22.686 "process": { 00:22:22.686 "type": "rebuild", 00:22:22.686 "target": "spare", 00:22:22.686 "progress": { 00:22:22.686 "blocks": 28672, 00:22:22.686 "percent": 45 00:22:22.686 } 00:22:22.686 }, 00:22:22.686 "base_bdevs_list": [ 00:22:22.686 { 00:22:22.686 "name": "spare", 00:22:22.686 "uuid": "6327b36c-0654-5a2f-8c77-05e2ae106df8", 00:22:22.686 "is_configured": true, 00:22:22.686 "data_offset": 2048, 00:22:22.686 "data_size": 63488 00:22:22.686 }, 00:22:22.686 { 00:22:22.686 "name": "BaseBdev2", 00:22:22.686 "uuid": "c87110c2-8835-5102-8920-757ec845bdbf", 00:22:22.686 "is_configured": true, 00:22:22.686 "data_offset": 2048, 00:22:22.686 "data_size": 63488 00:22:22.686 } 00:22:22.686 ] 00:22:22.686 }' 00:22:22.686 22:05:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:22.686 22:05:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:22.686 22:05:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:22.686 22:05:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:22.686 22:05:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:24.063 22:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:24.063 22:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:24.063 22:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:24.063 22:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:24.063 22:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:24.063 
22:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:24.063 22:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.063 22:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.063 22:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:24.063 "name": "raid_bdev1", 00:22:24.063 "uuid": "41074c1d-90f3-4c1f-9c84-00ce893dff97", 00:22:24.063 "strip_size_kb": 0, 00:22:24.063 "state": "online", 00:22:24.063 "raid_level": "raid1", 00:22:24.063 "superblock": true, 00:22:24.063 "num_base_bdevs": 2, 00:22:24.063 "num_base_bdevs_discovered": 2, 00:22:24.063 "num_base_bdevs_operational": 2, 00:22:24.063 "process": { 00:22:24.063 "type": "rebuild", 00:22:24.063 "target": "spare", 00:22:24.063 "progress": { 00:22:24.063 "blocks": 53248, 00:22:24.063 "percent": 83 00:22:24.063 } 00:22:24.063 }, 00:22:24.063 "base_bdevs_list": [ 00:22:24.063 { 00:22:24.063 "name": "spare", 00:22:24.063 "uuid": "6327b36c-0654-5a2f-8c77-05e2ae106df8", 00:22:24.063 "is_configured": true, 00:22:24.063 "data_offset": 2048, 00:22:24.063 "data_size": 63488 00:22:24.063 }, 00:22:24.063 { 00:22:24.063 "name": "BaseBdev2", 00:22:24.063 "uuid": "c87110c2-8835-5102-8920-757ec845bdbf", 00:22:24.063 "is_configured": true, 00:22:24.063 "data_offset": 2048, 00:22:24.063 "data_size": 63488 00:22:24.063 } 00:22:24.063 ] 00:22:24.063 }' 00:22:24.063 22:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:24.063 22:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:24.063 22:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:24.063 22:05:43 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:24.063 22:05:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:24.321 [2024-07-13 22:05:43.660624] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:24.321 [2024-07-13 22:05:43.660686] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:24.321 [2024-07-13 22:05:43.660766] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:25.256 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:25.256 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:25.256 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:25.256 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:25.256 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:25.256 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:25.256 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.256 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.256 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:25.256 "name": "raid_bdev1", 00:22:25.256 "uuid": "41074c1d-90f3-4c1f-9c84-00ce893dff97", 00:22:25.256 "strip_size_kb": 0, 00:22:25.256 "state": "online", 00:22:25.256 "raid_level": "raid1", 00:22:25.256 "superblock": true, 00:22:25.256 "num_base_bdevs": 2, 00:22:25.256 "num_base_bdevs_discovered": 2, 00:22:25.256 "num_base_bdevs_operational": 2, 00:22:25.256 
"base_bdevs_list": [ 00:22:25.256 { 00:22:25.256 "name": "spare", 00:22:25.256 "uuid": "6327b36c-0654-5a2f-8c77-05e2ae106df8", 00:22:25.256 "is_configured": true, 00:22:25.256 "data_offset": 2048, 00:22:25.256 "data_size": 63488 00:22:25.256 }, 00:22:25.256 { 00:22:25.256 "name": "BaseBdev2", 00:22:25.256 "uuid": "c87110c2-8835-5102-8920-757ec845bdbf", 00:22:25.256 "is_configured": true, 00:22:25.256 "data_offset": 2048, 00:22:25.256 "data_size": 63488 00:22:25.256 } 00:22:25.256 ] 00:22:25.256 }' 00:22:25.256 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:25.256 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:25.256 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:25.256 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:22:25.256 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:22:25.256 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:25.256 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:25.256 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:25.256 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:25.256 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:25.256 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.256 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.515 22:05:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:25.515 "name": "raid_bdev1", 00:22:25.515 "uuid": "41074c1d-90f3-4c1f-9c84-00ce893dff97", 00:22:25.515 "strip_size_kb": 0, 00:22:25.515 "state": "online", 00:22:25.515 "raid_level": "raid1", 00:22:25.515 "superblock": true, 00:22:25.515 "num_base_bdevs": 2, 00:22:25.515 "num_base_bdevs_discovered": 2, 00:22:25.515 "num_base_bdevs_operational": 2, 00:22:25.515 "base_bdevs_list": [ 00:22:25.515 { 00:22:25.515 "name": "spare", 00:22:25.515 "uuid": "6327b36c-0654-5a2f-8c77-05e2ae106df8", 00:22:25.515 "is_configured": true, 00:22:25.515 "data_offset": 2048, 00:22:25.515 "data_size": 63488 00:22:25.515 }, 00:22:25.515 { 00:22:25.515 "name": "BaseBdev2", 00:22:25.515 "uuid": "c87110c2-8835-5102-8920-757ec845bdbf", 00:22:25.515 "is_configured": true, 00:22:25.515 "data_offset": 2048, 00:22:25.515 "data_size": 63488 00:22:25.515 } 00:22:25.515 ] 00:22:25.515 }' 00:22:25.515 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:25.515 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:25.515 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:25.515 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:25.515 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:25.515 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:25.515 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:25.515 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:25.515 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:25.515 22:05:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:25.515 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:25.515 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:25.515 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:25.515 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:25.515 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.515 22:05:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.774 22:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:25.774 "name": "raid_bdev1", 00:22:25.774 "uuid": "41074c1d-90f3-4c1f-9c84-00ce893dff97", 00:22:25.774 "strip_size_kb": 0, 00:22:25.774 "state": "online", 00:22:25.774 "raid_level": "raid1", 00:22:25.774 "superblock": true, 00:22:25.774 "num_base_bdevs": 2, 00:22:25.774 "num_base_bdevs_discovered": 2, 00:22:25.774 "num_base_bdevs_operational": 2, 00:22:25.774 "base_bdevs_list": [ 00:22:25.774 { 00:22:25.774 "name": "spare", 00:22:25.774 "uuid": "6327b36c-0654-5a2f-8c77-05e2ae106df8", 00:22:25.774 "is_configured": true, 00:22:25.774 "data_offset": 2048, 00:22:25.774 "data_size": 63488 00:22:25.774 }, 00:22:25.774 { 00:22:25.774 "name": "BaseBdev2", 00:22:25.774 "uuid": "c87110c2-8835-5102-8920-757ec845bdbf", 00:22:25.775 "is_configured": true, 00:22:25.775 "data_offset": 2048, 00:22:25.775 "data_size": 63488 00:22:25.775 } 00:22:25.775 ] 00:22:25.775 }' 00:22:25.775 22:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:25.775 22:05:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.342 22:05:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:26.342 [2024-07-13 22:05:45.658545] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:26.342 [2024-07-13 22:05:45.658580] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:26.342 [2024-07-13 22:05:45.658663] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:26.342 [2024-07-13 22:05:45.658731] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:26.342 [2024-07-13 22:05:45.658744] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:22:26.342 22:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.342 22:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:22:26.601 22:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:22:26.601 22:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:22:26.601 22:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:22:26.601 22:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:26.601 22:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:26.601 22:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:26.601 22:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:26.601 22:05:45 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:26.601 22:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:26.601 22:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:22:26.601 22:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:26.601 22:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:26.601 22:05:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:26.860 /dev/nbd0 00:22:26.860 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:26.860 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:26.860 22:05:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:26.860 22:05:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:22:26.860 22:05:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:26.860 22:05:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:26.860 22:05:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:26.860 22:05:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:22:26.860 22:05:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:26.860 22:05:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:26.860 22:05:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:26.860 1+0 records in 00:22:26.860 1+0 records out 00:22:26.860 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216789 s, 18.9 MB/s 00:22:26.860 22:05:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:26.860 22:05:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:22:26.860 22:05:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:26.860 22:05:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:26.860 22:05:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:22:26.860 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:26.860 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:26.860 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:26.860 /dev/nbd1 00:22:27.118 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:27.118 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:27.118 22:05:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:22:27.118 22:05:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:22:27.118 22:05:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:27.118 22:05:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:27.118 22:05:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:22:27.118 22:05:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:22:27.118 22:05:46 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:27.118 22:05:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:27.118 22:05:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:27.118 1+0 records in 00:22:27.118 1+0 records out 00:22:27.118 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247789 s, 16.5 MB/s 00:22:27.119 22:05:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:27.119 22:05:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:22:27.119 22:05:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:27.119 22:05:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:27.119 22:05:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:22:27.119 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:27.119 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:27.119 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:27.119 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:27.119 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:27.119 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:27.119 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:27.119 22:05:46 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:22:27.119 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:27.119 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:27.377 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:27.377 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:27.377 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:27.377 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:27.377 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:27.377 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:27.377 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:27.377 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:27.377 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:27.377 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:27.637 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:27.637 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:27.637 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:27.637 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:27.637 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:27.637 
22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:27.637 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:27.637 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:27.637 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:22:27.637 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:27.637 22:05:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:27.896 [2024-07-13 22:05:47.135236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:27.896 [2024-07-13 22:05:47.135304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:27.896 [2024-07-13 22:05:47.135344] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043280 00:22:27.896 [2024-07-13 22:05:47.135356] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:27.896 [2024-07-13 22:05:47.137492] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:27.896 [2024-07-13 22:05:47.137520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:27.896 [2024-07-13 22:05:47.137601] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:27.896 [2024-07-13 22:05:47.137658] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:27.896 [2024-07-13 22:05:47.137805] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:27.896 spare 00:22:27.896 22:05:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:27.896 22:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:27.896 22:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:27.896 22:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:27.896 22:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:27.896 22:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:27.896 22:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:27.896 22:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:27.896 22:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:27.896 22:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:27.896 22:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.896 22:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.896 [2024-07-13 22:05:47.238124] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043880 00:22:27.896 [2024-07-13 22:05:47.238150] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:27.896 [2024-07-13 22:05:47.238395] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc9320 00:22:27.897 [2024-07-13 22:05:47.238581] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043880 00:22:27.897 [2024-07-13 22:05:47.238592] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
raid_bdev1, raid_bdev 0x616000043880 00:22:27.897 [2024-07-13 22:05:47.238735] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:28.155 22:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:28.155 "name": "raid_bdev1", 00:22:28.155 "uuid": "41074c1d-90f3-4c1f-9c84-00ce893dff97", 00:22:28.156 "strip_size_kb": 0, 00:22:28.156 "state": "online", 00:22:28.156 "raid_level": "raid1", 00:22:28.156 "superblock": true, 00:22:28.156 "num_base_bdevs": 2, 00:22:28.156 "num_base_bdevs_discovered": 2, 00:22:28.156 "num_base_bdevs_operational": 2, 00:22:28.156 "base_bdevs_list": [ 00:22:28.156 { 00:22:28.156 "name": "spare", 00:22:28.156 "uuid": "6327b36c-0654-5a2f-8c77-05e2ae106df8", 00:22:28.156 "is_configured": true, 00:22:28.156 "data_offset": 2048, 00:22:28.156 "data_size": 63488 00:22:28.156 }, 00:22:28.156 { 00:22:28.156 "name": "BaseBdev2", 00:22:28.156 "uuid": "c87110c2-8835-5102-8920-757ec845bdbf", 00:22:28.156 "is_configured": true, 00:22:28.156 "data_offset": 2048, 00:22:28.156 "data_size": 63488 00:22:28.156 } 00:22:28.156 ] 00:22:28.156 }' 00:22:28.156 22:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:28.156 22:05:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:28.722 22:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:28.723 22:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:28.723 22:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:28.723 22:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:28.723 22:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:28.723 22:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:22:28.723 22:05:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.723 22:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:28.723 "name": "raid_bdev1", 00:22:28.723 "uuid": "41074c1d-90f3-4c1f-9c84-00ce893dff97", 00:22:28.723 "strip_size_kb": 0, 00:22:28.723 "state": "online", 00:22:28.723 "raid_level": "raid1", 00:22:28.723 "superblock": true, 00:22:28.723 "num_base_bdevs": 2, 00:22:28.723 "num_base_bdevs_discovered": 2, 00:22:28.723 "num_base_bdevs_operational": 2, 00:22:28.723 "base_bdevs_list": [ 00:22:28.723 { 00:22:28.723 "name": "spare", 00:22:28.723 "uuid": "6327b36c-0654-5a2f-8c77-05e2ae106df8", 00:22:28.723 "is_configured": true, 00:22:28.723 "data_offset": 2048, 00:22:28.723 "data_size": 63488 00:22:28.723 }, 00:22:28.723 { 00:22:28.723 "name": "BaseBdev2", 00:22:28.723 "uuid": "c87110c2-8835-5102-8920-757ec845bdbf", 00:22:28.723 "is_configured": true, 00:22:28.723 "data_offset": 2048, 00:22:28.723 "data_size": 63488 00:22:28.723 } 00:22:28.723 ] 00:22:28.723 }' 00:22:28.723 22:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:28.723 22:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:28.723 22:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:28.723 22:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:28.723 22:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.723 22:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:28.982 22:05:48 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:22:28.982 22:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:29.241 [2024-07-13 22:05:48.426911] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:29.241 22:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:29.241 22:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:29.241 22:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:29.241 22:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:29.241 22:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:29.241 22:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:29.241 22:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:29.241 22:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:29.241 22:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:29.241 22:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:29.241 22:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.241 22:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.241 22:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:29.241 "name": "raid_bdev1", 00:22:29.241 "uuid": "41074c1d-90f3-4c1f-9c84-00ce893dff97", 00:22:29.241 
"strip_size_kb": 0, 00:22:29.241 "state": "online", 00:22:29.241 "raid_level": "raid1", 00:22:29.241 "superblock": true, 00:22:29.241 "num_base_bdevs": 2, 00:22:29.241 "num_base_bdevs_discovered": 1, 00:22:29.241 "num_base_bdevs_operational": 1, 00:22:29.241 "base_bdevs_list": [ 00:22:29.241 { 00:22:29.241 "name": null, 00:22:29.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.241 "is_configured": false, 00:22:29.241 "data_offset": 2048, 00:22:29.241 "data_size": 63488 00:22:29.241 }, 00:22:29.241 { 00:22:29.241 "name": "BaseBdev2", 00:22:29.241 "uuid": "c87110c2-8835-5102-8920-757ec845bdbf", 00:22:29.241 "is_configured": true, 00:22:29.241 "data_offset": 2048, 00:22:29.241 "data_size": 63488 00:22:29.242 } 00:22:29.242 ] 00:22:29.242 }' 00:22:29.242 22:05:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:29.242 22:05:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.810 22:05:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:30.069 [2024-07-13 22:05:49.261113] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:30.069 [2024-07-13 22:05:49.261299] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:30.069 [2024-07-13 22:05:49.261318] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:30.069 [2024-07-13 22:05:49.261352] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:30.069 [2024-07-13 22:05:49.278870] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc93f0 00:22:30.069 [2024-07-13 22:05:49.280684] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:30.069 22:05:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:22:31.004 22:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:31.004 22:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:31.004 22:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:31.004 22:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:31.004 22:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:31.004 22:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.004 22:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.262 22:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:31.262 "name": "raid_bdev1", 00:22:31.262 "uuid": "41074c1d-90f3-4c1f-9c84-00ce893dff97", 00:22:31.262 "strip_size_kb": 0, 00:22:31.262 "state": "online", 00:22:31.262 "raid_level": "raid1", 00:22:31.262 "superblock": true, 00:22:31.262 "num_base_bdevs": 2, 00:22:31.262 "num_base_bdevs_discovered": 2, 00:22:31.262 "num_base_bdevs_operational": 2, 00:22:31.262 "process": { 00:22:31.262 "type": "rebuild", 00:22:31.262 "target": "spare", 00:22:31.262 "progress": { 00:22:31.262 "blocks": 22528, 00:22:31.262 "percent": 35 
00:22:31.262 } 00:22:31.262 }, 00:22:31.262 "base_bdevs_list": [ 00:22:31.262 { 00:22:31.262 "name": "spare", 00:22:31.262 "uuid": "6327b36c-0654-5a2f-8c77-05e2ae106df8", 00:22:31.262 "is_configured": true, 00:22:31.262 "data_offset": 2048, 00:22:31.262 "data_size": 63488 00:22:31.262 }, 00:22:31.262 { 00:22:31.262 "name": "BaseBdev2", 00:22:31.262 "uuid": "c87110c2-8835-5102-8920-757ec845bdbf", 00:22:31.262 "is_configured": true, 00:22:31.262 "data_offset": 2048, 00:22:31.262 "data_size": 63488 00:22:31.262 } 00:22:31.262 ] 00:22:31.262 }' 00:22:31.262 22:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:31.262 22:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:31.262 22:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:31.262 22:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:31.262 22:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:31.521 [2024-07-13 22:05:50.714222] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:31.521 [2024-07-13 22:05:50.792303] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:31.521 [2024-07-13 22:05:50.792363] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:31.521 [2024-07-13 22:05:50.792380] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:31.521 [2024-07-13 22:05:50.792392] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:31.521 22:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:31.521 22:05:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:31.521 22:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:31.521 22:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:31.521 22:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:31.521 22:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:31.521 22:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:31.521 22:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:31.521 22:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:31.521 22:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:31.521 22:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.521 22:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.780 22:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:31.780 "name": "raid_bdev1", 00:22:31.780 "uuid": "41074c1d-90f3-4c1f-9c84-00ce893dff97", 00:22:31.780 "strip_size_kb": 0, 00:22:31.780 "state": "online", 00:22:31.780 "raid_level": "raid1", 00:22:31.780 "superblock": true, 00:22:31.780 "num_base_bdevs": 2, 00:22:31.780 "num_base_bdevs_discovered": 1, 00:22:31.780 "num_base_bdevs_operational": 1, 00:22:31.780 "base_bdevs_list": [ 00:22:31.780 { 00:22:31.780 "name": null, 00:22:31.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.780 "is_configured": false, 00:22:31.780 "data_offset": 2048, 00:22:31.780 "data_size": 63488 00:22:31.780 }, 00:22:31.780 { 
00:22:31.780 "name": "BaseBdev2", 00:22:31.780 "uuid": "c87110c2-8835-5102-8920-757ec845bdbf", 00:22:31.780 "is_configured": true, 00:22:31.780 "data_offset": 2048, 00:22:31.780 "data_size": 63488 00:22:31.780 } 00:22:31.780 ] 00:22:31.780 }' 00:22:31.780 22:05:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:31.780 22:05:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.345 22:05:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:32.345 [2024-07-13 22:05:51.639557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:32.345 [2024-07-13 22:05:51.639628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:32.345 [2024-07-13 22:05:51.639651] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043e80 00:22:32.345 [2024-07-13 22:05:51.639666] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:32.345 [2024-07-13 22:05:51.640193] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:32.345 [2024-07-13 22:05:51.640219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:32.345 [2024-07-13 22:05:51.640316] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:32.345 [2024-07-13 22:05:51.640332] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:32.345 [2024-07-13 22:05:51.640345] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:32.345 [2024-07-13 22:05:51.640373] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:32.345 [2024-07-13 22:05:51.657778] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc94c0 00:22:32.345 spare 00:22:32.345 [2024-07-13 22:05:51.659527] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:32.345 22:05:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:22:33.720 22:05:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:33.720 22:05:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:33.720 22:05:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:33.720 22:05:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:33.720 22:05:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:33.720 22:05:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.720 22:05:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.720 22:05:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:33.720 "name": "raid_bdev1", 00:22:33.720 "uuid": "41074c1d-90f3-4c1f-9c84-00ce893dff97", 00:22:33.720 "strip_size_kb": 0, 00:22:33.720 "state": "online", 00:22:33.720 "raid_level": "raid1", 00:22:33.720 "superblock": true, 00:22:33.720 "num_base_bdevs": 2, 00:22:33.720 "num_base_bdevs_discovered": 2, 00:22:33.720 "num_base_bdevs_operational": 2, 00:22:33.720 "process": { 00:22:33.720 "type": "rebuild", 00:22:33.720 "target": "spare", 00:22:33.720 "progress": { 00:22:33.720 "blocks": 22528, 00:22:33.720 
"percent": 35 00:22:33.720 } 00:22:33.720 }, 00:22:33.720 "base_bdevs_list": [ 00:22:33.720 { 00:22:33.720 "name": "spare", 00:22:33.720 "uuid": "6327b36c-0654-5a2f-8c77-05e2ae106df8", 00:22:33.720 "is_configured": true, 00:22:33.720 "data_offset": 2048, 00:22:33.720 "data_size": 63488 00:22:33.720 }, 00:22:33.720 { 00:22:33.720 "name": "BaseBdev2", 00:22:33.720 "uuid": "c87110c2-8835-5102-8920-757ec845bdbf", 00:22:33.720 "is_configured": true, 00:22:33.720 "data_offset": 2048, 00:22:33.720 "data_size": 63488 00:22:33.720 } 00:22:33.720 ] 00:22:33.720 }' 00:22:33.720 22:05:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:33.720 22:05:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:33.720 22:05:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:33.720 22:05:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:33.720 22:05:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:33.720 [2024-07-13 22:05:53.076926] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:33.991 [2024-07-13 22:05:53.171032] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:33.991 [2024-07-13 22:05:53.171081] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:33.991 [2024-07-13 22:05:53.171099] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:33.991 [2024-07-13 22:05:53.171108] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:33.991 22:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:33.991 
22:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:33.991 22:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:33.991 22:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:33.991 22:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:33.992 22:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:33.992 22:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:33.992 22:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:33.992 22:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:33.992 22:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:33.992 22:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.992 22:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.263 22:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:34.263 "name": "raid_bdev1", 00:22:34.263 "uuid": "41074c1d-90f3-4c1f-9c84-00ce893dff97", 00:22:34.263 "strip_size_kb": 0, 00:22:34.263 "state": "online", 00:22:34.263 "raid_level": "raid1", 00:22:34.263 "superblock": true, 00:22:34.263 "num_base_bdevs": 2, 00:22:34.263 "num_base_bdevs_discovered": 1, 00:22:34.263 "num_base_bdevs_operational": 1, 00:22:34.263 "base_bdevs_list": [ 00:22:34.263 { 00:22:34.263 "name": null, 00:22:34.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.263 "is_configured": false, 00:22:34.263 "data_offset": 2048, 00:22:34.263 "data_size": 63488 00:22:34.263 }, 
00:22:34.263 { 00:22:34.263 "name": "BaseBdev2", 00:22:34.263 "uuid": "c87110c2-8835-5102-8920-757ec845bdbf", 00:22:34.263 "is_configured": true, 00:22:34.263 "data_offset": 2048, 00:22:34.263 "data_size": 63488 00:22:34.263 } 00:22:34.263 ] 00:22:34.263 }' 00:22:34.263 22:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:34.263 22:05:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.586 22:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:34.586 22:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:34.586 22:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:34.586 22:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:34.586 22:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:34.586 22:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.586 22:05:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.845 22:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:34.845 "name": "raid_bdev1", 00:22:34.845 "uuid": "41074c1d-90f3-4c1f-9c84-00ce893dff97", 00:22:34.845 "strip_size_kb": 0, 00:22:34.845 "state": "online", 00:22:34.845 "raid_level": "raid1", 00:22:34.845 "superblock": true, 00:22:34.845 "num_base_bdevs": 2, 00:22:34.845 "num_base_bdevs_discovered": 1, 00:22:34.845 "num_base_bdevs_operational": 1, 00:22:34.845 "base_bdevs_list": [ 00:22:34.845 { 00:22:34.845 "name": null, 00:22:34.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.845 "is_configured": false, 00:22:34.845 "data_offset": 2048, 
00:22:34.845 "data_size": 63488 00:22:34.845 }, 00:22:34.845 { 00:22:34.845 "name": "BaseBdev2", 00:22:34.845 "uuid": "c87110c2-8835-5102-8920-757ec845bdbf", 00:22:34.845 "is_configured": true, 00:22:34.845 "data_offset": 2048, 00:22:34.845 "data_size": 63488 00:22:34.845 } 00:22:34.845 ] 00:22:34.845 }' 00:22:34.845 22:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:34.845 22:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:34.845 22:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:34.845 22:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:34.845 22:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:35.104 22:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:35.104 [2024-07-13 22:05:54.467315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:35.104 [2024-07-13 22:05:54.467378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:35.104 [2024-07-13 22:05:54.467402] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044480 00:22:35.104 [2024-07-13 22:05:54.467416] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:35.104 [2024-07-13 22:05:54.467885] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:35.104 [2024-07-13 22:05:54.467916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:35.104 [2024-07-13 22:05:54.468012] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:35.104 [2024-07-13 22:05:54.468029] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:35.104 [2024-07-13 22:05:54.468042] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:35.104 BaseBdev1 00:22:35.104 22:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:22:36.482 22:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:36.483 22:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:36.483 22:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:36.483 22:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:36.483 22:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:36.483 22:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:36.483 22:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:36.483 22:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:36.483 22:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:36.483 22:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:36.483 22:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.483 22:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.483 22:05:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:36.483 "name": "raid_bdev1", 00:22:36.483 "uuid": "41074c1d-90f3-4c1f-9c84-00ce893dff97", 00:22:36.483 "strip_size_kb": 0, 00:22:36.483 "state": "online", 00:22:36.483 "raid_level": "raid1", 00:22:36.483 "superblock": true, 00:22:36.483 "num_base_bdevs": 2, 00:22:36.483 "num_base_bdevs_discovered": 1, 00:22:36.483 "num_base_bdevs_operational": 1, 00:22:36.483 "base_bdevs_list": [ 00:22:36.483 { 00:22:36.483 "name": null, 00:22:36.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.483 "is_configured": false, 00:22:36.483 "data_offset": 2048, 00:22:36.483 "data_size": 63488 00:22:36.483 }, 00:22:36.483 { 00:22:36.483 "name": "BaseBdev2", 00:22:36.483 "uuid": "c87110c2-8835-5102-8920-757ec845bdbf", 00:22:36.483 "is_configured": true, 00:22:36.483 "data_offset": 2048, 00:22:36.483 "data_size": 63488 00:22:36.483 } 00:22:36.483 ] 00:22:36.483 }' 00:22:36.483 22:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:36.483 22:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.051 22:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:37.051 22:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:37.051 22:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:37.051 22:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:37.051 22:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:37.051 22:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.051 22:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:22:37.051 22:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:37.051 "name": "raid_bdev1", 00:22:37.051 "uuid": "41074c1d-90f3-4c1f-9c84-00ce893dff97", 00:22:37.051 "strip_size_kb": 0, 00:22:37.051 "state": "online", 00:22:37.051 "raid_level": "raid1", 00:22:37.051 "superblock": true, 00:22:37.051 "num_base_bdevs": 2, 00:22:37.051 "num_base_bdevs_discovered": 1, 00:22:37.051 "num_base_bdevs_operational": 1, 00:22:37.051 "base_bdevs_list": [ 00:22:37.051 { 00:22:37.051 "name": null, 00:22:37.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.051 "is_configured": false, 00:22:37.051 "data_offset": 2048, 00:22:37.051 "data_size": 63488 00:22:37.051 }, 00:22:37.051 { 00:22:37.051 "name": "BaseBdev2", 00:22:37.051 "uuid": "c87110c2-8835-5102-8920-757ec845bdbf", 00:22:37.051 "is_configured": true, 00:22:37.051 "data_offset": 2048, 00:22:37.051 "data_size": 63488 00:22:37.051 } 00:22:37.051 ] 00:22:37.051 }' 00:22:37.051 22:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:37.051 22:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:37.051 22:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:37.051 22:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:37.051 22:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:37.051 22:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:22:37.051 22:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 
BaseBdev1 00:22:37.051 22:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:37.051 22:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:37.051 22:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:37.051 22:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:37.051 22:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:37.051 22:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:37.051 22:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:37.051 22:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:22:37.051 22:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:37.312 [2024-07-13 22:05:56.580885] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:37.312 [2024-07-13 22:05:56.581055] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:37.312 [2024-07-13 22:05:56.581081] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:37.312 request: 00:22:37.312 { 00:22:37.312 "base_bdev": "BaseBdev1", 00:22:37.312 "raid_bdev": "raid_bdev1", 00:22:37.312 "method": 
"bdev_raid_add_base_bdev", 00:22:37.312 "req_id": 1 00:22:37.312 } 00:22:37.312 Got JSON-RPC error response 00:22:37.312 response: 00:22:37.312 { 00:22:37.312 "code": -22, 00:22:37.312 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:37.312 } 00:22:37.312 22:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:22:37.312 22:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:37.312 22:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:37.312 22:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:37.312 22:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:22:38.250 22:05:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:38.250 22:05:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:38.250 22:05:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:38.250 22:05:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:38.250 22:05:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:38.250 22:05:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:38.250 22:05:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:38.250 22:05:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:38.250 22:05:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:38.250 22:05:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:38.250 22:05:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.250 22:05:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.508 22:05:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:38.508 "name": "raid_bdev1", 00:22:38.508 "uuid": "41074c1d-90f3-4c1f-9c84-00ce893dff97", 00:22:38.508 "strip_size_kb": 0, 00:22:38.508 "state": "online", 00:22:38.508 "raid_level": "raid1", 00:22:38.508 "superblock": true, 00:22:38.508 "num_base_bdevs": 2, 00:22:38.508 "num_base_bdevs_discovered": 1, 00:22:38.508 "num_base_bdevs_operational": 1, 00:22:38.508 "base_bdevs_list": [ 00:22:38.508 { 00:22:38.508 "name": null, 00:22:38.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.508 "is_configured": false, 00:22:38.508 "data_offset": 2048, 00:22:38.508 "data_size": 63488 00:22:38.508 }, 00:22:38.508 { 00:22:38.508 "name": "BaseBdev2", 00:22:38.508 "uuid": "c87110c2-8835-5102-8920-757ec845bdbf", 00:22:38.508 "is_configured": true, 00:22:38.508 "data_offset": 2048, 00:22:38.508 "data_size": 63488 00:22:38.508 } 00:22:38.508 ] 00:22:38.508 }' 00:22:38.508 22:05:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:38.508 22:05:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.075 22:05:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:39.075 22:05:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:39.075 22:05:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:39.075 22:05:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:39.075 22:05:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:39.075 22:05:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.075 22:05:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.075 22:05:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:39.075 "name": "raid_bdev1", 00:22:39.075 "uuid": "41074c1d-90f3-4c1f-9c84-00ce893dff97", 00:22:39.075 "strip_size_kb": 0, 00:22:39.075 "state": "online", 00:22:39.075 "raid_level": "raid1", 00:22:39.075 "superblock": true, 00:22:39.075 "num_base_bdevs": 2, 00:22:39.075 "num_base_bdevs_discovered": 1, 00:22:39.075 "num_base_bdevs_operational": 1, 00:22:39.075 "base_bdevs_list": [ 00:22:39.075 { 00:22:39.075 "name": null, 00:22:39.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.075 "is_configured": false, 00:22:39.075 "data_offset": 2048, 00:22:39.075 "data_size": 63488 00:22:39.075 }, 00:22:39.075 { 00:22:39.075 "name": "BaseBdev2", 00:22:39.075 "uuid": "c87110c2-8835-5102-8920-757ec845bdbf", 00:22:39.075 "is_configured": true, 00:22:39.075 "data_offset": 2048, 00:22:39.075 "data_size": 63488 00:22:39.075 } 00:22:39.075 ] 00:22:39.075 }' 00:22:39.075 22:05:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:39.334 22:05:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:39.334 22:05:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:39.334 22:05:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:39.334 22:05:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 1463865 00:22:39.334 22:05:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1463865 ']' 00:22:39.334 22:05:58 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@952 -- # kill -0 1463865 00:22:39.334 22:05:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:22:39.334 22:05:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:39.334 22:05:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1463865 00:22:39.334 22:05:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:39.334 22:05:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:39.334 22:05:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1463865' 00:22:39.334 killing process with pid 1463865 00:22:39.334 22:05:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 1463865 00:22:39.334 Received shutdown signal, test time was about 60.000000 seconds 00:22:39.334 00:22:39.334 Latency(us) 00:22:39.334 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.335 =================================================================================================================== 00:22:39.335 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:39.335 [2024-07-13 22:05:58.580874] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:39.335 [2024-07-13 22:05:58.581017] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:39.335 22:05:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 1463865 00:22:39.335 [2024-07-13 22:05:58.581074] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:39.335 [2024-07-13 22:05:58.581087] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043880 name raid_bdev1, state offline 00:22:39.594 [2024-07-13 22:05:58.814769] 
bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:40.973 22:06:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:22:40.973 00:22:40.973 real 0m31.103s 00:22:40.973 user 0m42.911s 00:22:40.973 sys 0m5.443s 00:22:40.973 22:06:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:40.973 22:06:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:40.973 ************************************ 00:22:40.973 END TEST raid_rebuild_test_sb 00:22:40.973 ************************************ 00:22:40.973 22:06:00 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:40.973 22:06:00 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:22:40.973 22:06:00 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:22:40.973 22:06:00 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:40.973 22:06:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:40.973 ************************************ 00:22:40.973 START TEST raid_rebuild_test_io 00:22:40.973 ************************************ 00:22:40.973 22:06:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 false true true 00:22:40.973 22:06:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:22:40.974 22:06:00 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=1469561 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 1469561 /var/tmp/spdk-raid.sock 00:22:40.974 22:06:00 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@829 -- # '[' -z 1469561 ']' 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:40.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:40.974 22:06:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:40.974 [2024-07-13 22:06:00.191233] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:40.974 [2024-07-13 22:06:00.191335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469561 ] 00:22:40.974 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:40.974 Zero copy mechanism will not be used. 
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3d:01.0 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3d:01.1 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3d:01.2 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3d:01.3 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3d:01.4 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3d:01.5 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3d:01.6 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3d:01.7 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3d:02.0 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3d:02.1 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3d:02.2 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3d:02.3 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3d:02.4 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3d:02.5 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3d:02.6 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3d:02.7 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3f:01.0 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3f:01.1 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3f:01.2 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3f:01.3 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3f:01.4 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3f:01.5 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3f:01.6 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3f:01.7 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3f:02.0 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3f:02.1 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3f:02.2 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3f:02.3 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3f:02.4 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3f:02.5 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3f:02.6 cannot be used
00:22:40.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:22:40.974 EAL: Requested device 0000:3f:02.7 cannot be used
00:22:40.974 [2024-07-13 22:06:00.356114] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:41.234 [2024-07-13 22:06:00.560670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:22:41.493 [2024-07-13 22:06:00.807725] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:22:41.493 [2024-07-13 22:06:00.807754] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:22:41.753 22:06:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:41.753 22:06:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # return 0
00:22:41.753 22:06:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}"
00:22:41.753 22:06:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:22:41.753 BaseBdev1_malloc
00:22:42.012 22:06:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:22:42.012 [2024-07-13 22:06:01.310668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:22:42.012 [2024-07-13 22:06:01.310728] vbdev_passthru.c: 635:vbdev_passthru_register:
*NOTICE*: base bdev opened 00:22:42.012 [2024-07-13 22:06:01.310767] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:22:42.012 [2024-07-13 22:06:01.310781] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.012 [2024-07-13 22:06:01.312887] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.012 [2024-07-13 22:06:01.312924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:42.012 BaseBdev1 00:22:42.012 22:06:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:22:42.012 22:06:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:42.270 BaseBdev2_malloc 00:22:42.270 22:06:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:42.528 [2024-07-13 22:06:01.690476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:42.528 [2024-07-13 22:06:01.690528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.528 [2024-07-13 22:06:01.690565] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:22:42.528 [2024-07-13 22:06:01.690580] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.528 [2024-07-13 22:06:01.692651] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.528 [2024-07-13 22:06:01.692683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:42.528 BaseBdev2 00:22:42.528 22:06:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:42.528 spare_malloc 00:22:42.528 22:06:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:42.786 spare_delay 00:22:42.786 22:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:43.045 [2024-07-13 22:06:02.235877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:43.045 [2024-07-13 22:06:02.235939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:43.045 [2024-07-13 22:06:02.235977] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:22:43.045 [2024-07-13 22:06:02.235991] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:43.045 [2024-07-13 22:06:02.238134] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:43.045 [2024-07-13 22:06:02.238165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:43.045 spare 00:22:43.045 22:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:22:43.045 [2024-07-13 22:06:02.400327] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:43.045 [2024-07-13 22:06:02.402167] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:43.045 [2024-07-13 22:06:02.402253] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x616000041a80 00:22:43.045 [2024-07-13 22:06:02.402267] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:43.045 [2024-07-13 22:06:02.402557] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:22:43.045 [2024-07-13 22:06:02.402748] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:22:43.045 [2024-07-13 22:06:02.402762] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:22:43.045 [2024-07-13 22:06:02.402952] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:43.045 22:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:43.045 22:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:43.045 22:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:43.045 22:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:43.045 22:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:43.045 22:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:43.045 22:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:43.045 22:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:43.045 22:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:43.045 22:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:43.045 22:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.045 22:06:02 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.304 22:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:43.304 "name": "raid_bdev1", 00:22:43.304 "uuid": "8d4c44d8-2f65-4f30-b4ce-3bbe04ce6c43", 00:22:43.304 "strip_size_kb": 0, 00:22:43.304 "state": "online", 00:22:43.304 "raid_level": "raid1", 00:22:43.304 "superblock": false, 00:22:43.304 "num_base_bdevs": 2, 00:22:43.304 "num_base_bdevs_discovered": 2, 00:22:43.304 "num_base_bdevs_operational": 2, 00:22:43.304 "base_bdevs_list": [ 00:22:43.304 { 00:22:43.304 "name": "BaseBdev1", 00:22:43.304 "uuid": "a237775d-9167-515e-b0ca-437ed5219a78", 00:22:43.304 "is_configured": true, 00:22:43.304 "data_offset": 0, 00:22:43.304 "data_size": 65536 00:22:43.304 }, 00:22:43.304 { 00:22:43.304 "name": "BaseBdev2", 00:22:43.304 "uuid": "65ceba1d-344a-58da-a022-74bc55234d1a", 00:22:43.304 "is_configured": true, 00:22:43.304 "data_offset": 0, 00:22:43.304 "data_size": 65536 00:22:43.304 } 00:22:43.304 ] 00:22:43.304 }' 00:22:43.304 22:06:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:43.304 22:06:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:43.872 22:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:43.872 22:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:22:43.872 [2024-07-13 22:06:03.218695] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:43.872 22:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:22:43.872 22:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:22:43.872 22:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:44.130 22:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:22:44.130 22:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:22:44.130 22:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:44.130 22:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:44.130 [2024-07-13 22:06:03.504144] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:22:44.130 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:44.130 Zero copy mechanism will not be used. 00:22:44.130 Running I/O for 60 seconds... 
00:22:44.388 [2024-07-13 22:06:03.581608] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:44.388 [2024-07-13 22:06:03.587263] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000108b0 00:22:44.388 22:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:44.388 22:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:44.388 22:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:44.388 22:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:44.388 22:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:44.388 22:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:44.388 22:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:44.388 22:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:44.388 22:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:44.388 22:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:44.388 22:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.388 22:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.644 22:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:44.644 "name": "raid_bdev1", 00:22:44.644 "uuid": "8d4c44d8-2f65-4f30-b4ce-3bbe04ce6c43", 00:22:44.644 "strip_size_kb": 0, 00:22:44.645 "state": "online", 00:22:44.645 "raid_level": "raid1", 00:22:44.645 "superblock": 
false, 00:22:44.645 "num_base_bdevs": 2, 00:22:44.645 "num_base_bdevs_discovered": 1, 00:22:44.645 "num_base_bdevs_operational": 1, 00:22:44.645 "base_bdevs_list": [ 00:22:44.645 { 00:22:44.645 "name": null, 00:22:44.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.645 "is_configured": false, 00:22:44.645 "data_offset": 0, 00:22:44.645 "data_size": 65536 00:22:44.645 }, 00:22:44.645 { 00:22:44.645 "name": "BaseBdev2", 00:22:44.645 "uuid": "65ceba1d-344a-58da-a022-74bc55234d1a", 00:22:44.645 "is_configured": true, 00:22:44.645 "data_offset": 0, 00:22:44.645 "data_size": 65536 00:22:44.645 } 00:22:44.645 ] 00:22:44.645 }' 00:22:44.645 22:06:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:44.645 22:06:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:44.903 22:06:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:45.161 [2024-07-13 22:06:04.434363] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:45.161 22:06:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:22:45.161 [2024-07-13 22:06:04.489264] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:22:45.161 [2024-07-13 22:06:04.491092] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:45.421 [2024-07-13 22:06:04.604154] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:45.421 [2024-07-13 22:06:04.604510] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:45.679 [2024-07-13 22:06:04.823416] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 
00:22:45.679 [2024-07-13 22:06:04.823664] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:45.937 [2024-07-13 22:06:05.290241] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:45.937 [2024-07-13 22:06:05.290527] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:46.195 22:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:46.195 22:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:46.195 22:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:46.195 22:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:46.195 22:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:46.195 22:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.195 22:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:46.195 [2024-07-13 22:06:05.507930] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:46.454 22:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:46.454 "name": "raid_bdev1", 00:22:46.454 "uuid": "8d4c44d8-2f65-4f30-b4ce-3bbe04ce6c43", 00:22:46.454 "strip_size_kb": 0, 00:22:46.454 "state": "online", 00:22:46.454 "raid_level": "raid1", 00:22:46.454 "superblock": false, 00:22:46.454 "num_base_bdevs": 2, 00:22:46.454 "num_base_bdevs_discovered": 2, 00:22:46.454 "num_base_bdevs_operational": 2, 
00:22:46.454 "process": { 00:22:46.454 "type": "rebuild", 00:22:46.454 "target": "spare", 00:22:46.454 "progress": { 00:22:46.454 "blocks": 14336, 00:22:46.454 "percent": 21 00:22:46.454 } 00:22:46.454 }, 00:22:46.454 "base_bdevs_list": [ 00:22:46.454 { 00:22:46.454 "name": "spare", 00:22:46.454 "uuid": "fc4474a8-b3dc-5b7c-8a5b-e5913d4ed585", 00:22:46.454 "is_configured": true, 00:22:46.454 "data_offset": 0, 00:22:46.454 "data_size": 65536 00:22:46.454 }, 00:22:46.454 { 00:22:46.454 "name": "BaseBdev2", 00:22:46.454 "uuid": "65ceba1d-344a-58da-a022-74bc55234d1a", 00:22:46.454 "is_configured": true, 00:22:46.454 "data_offset": 0, 00:22:46.454 "data_size": 65536 00:22:46.454 } 00:22:46.454 ] 00:22:46.454 }' 00:22:46.454 22:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:46.454 22:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:46.454 22:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:46.454 [2024-07-13 22:06:05.721444] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:46.454 [2024-07-13 22:06:05.721682] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:46.454 22:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:46.454 22:06:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:46.713 [2024-07-13 22:06:05.915730] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:46.713 [2024-07-13 22:06:05.949078] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:46.713 [2024-07-13 
22:06:05.956285] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:46.713 [2024-07-13 22:06:05.956312] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:46.713 [2024-07-13 22:06:05.956329] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:46.713 [2024-07-13 22:06:05.998208] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000108b0 00:22:46.713 22:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:46.713 22:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:46.713 22:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:46.713 22:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:46.713 22:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:46.713 22:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:46.713 22:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:46.713 22:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:46.713 22:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:46.713 22:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:46.713 22:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.713 22:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:46.972 22:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:22:46.972 "name": "raid_bdev1", 00:22:46.972 "uuid": "8d4c44d8-2f65-4f30-b4ce-3bbe04ce6c43", 00:22:46.972 "strip_size_kb": 0, 00:22:46.972 "state": "online", 00:22:46.972 "raid_level": "raid1", 00:22:46.972 "superblock": false, 00:22:46.972 "num_base_bdevs": 2, 00:22:46.972 "num_base_bdevs_discovered": 1, 00:22:46.972 "num_base_bdevs_operational": 1, 00:22:46.972 "base_bdevs_list": [ 00:22:46.972 { 00:22:46.972 "name": null, 00:22:46.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.972 "is_configured": false, 00:22:46.972 "data_offset": 0, 00:22:46.972 "data_size": 65536 00:22:46.972 }, 00:22:46.972 { 00:22:46.972 "name": "BaseBdev2", 00:22:46.972 "uuid": "65ceba1d-344a-58da-a022-74bc55234d1a", 00:22:46.972 "is_configured": true, 00:22:46.972 "data_offset": 0, 00:22:46.972 "data_size": 65536 00:22:46.972 } 00:22:46.972 ] 00:22:46.972 }' 00:22:46.972 22:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:46.972 22:06:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:47.540 22:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:47.540 22:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:47.540 22:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:47.540 22:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:47.540 22:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:47.540 22:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.540 22:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.540 22:06:06 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:47.540 "name": "raid_bdev1", 00:22:47.540 "uuid": "8d4c44d8-2f65-4f30-b4ce-3bbe04ce6c43", 00:22:47.540 "strip_size_kb": 0, 00:22:47.540 "state": "online", 00:22:47.540 "raid_level": "raid1", 00:22:47.540 "superblock": false, 00:22:47.540 "num_base_bdevs": 2, 00:22:47.540 "num_base_bdevs_discovered": 1, 00:22:47.540 "num_base_bdevs_operational": 1, 00:22:47.540 "base_bdevs_list": [ 00:22:47.540 { 00:22:47.540 "name": null, 00:22:47.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.540 "is_configured": false, 00:22:47.540 "data_offset": 0, 00:22:47.540 "data_size": 65536 00:22:47.540 }, 00:22:47.540 { 00:22:47.540 "name": "BaseBdev2", 00:22:47.540 "uuid": "65ceba1d-344a-58da-a022-74bc55234d1a", 00:22:47.540 "is_configured": true, 00:22:47.540 "data_offset": 0, 00:22:47.540 "data_size": 65536 00:22:47.540 } 00:22:47.540 ] 00:22:47.540 }' 00:22:47.540 22:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:47.540 22:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:47.540 22:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:47.799 22:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:47.799 22:06:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:47.799 [2024-07-13 22:06:07.118809] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:47.799 22:06:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:47.799 [2024-07-13 22:06:07.177684] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010a50 00:22:47.799 [2024-07-13 22:06:07.179479] 
bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:48.096 [2024-07-13 22:06:07.292525] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:48.096 [2024-07-13 22:06:07.292838] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:48.096 [2024-07-13 22:06:07.430167] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:48.354 [2024-07-13 22:06:07.679494] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:48.923 [2024-07-13 22:06:08.141908] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:48.923 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:48.923 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:48.923 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:48.923 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:48.923 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:48.923 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.923 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.182 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:49.182 "name": "raid_bdev1", 00:22:49.182 "uuid": "8d4c44d8-2f65-4f30-b4ce-3bbe04ce6c43", 00:22:49.182 
"strip_size_kb": 0, 00:22:49.182 "state": "online", 00:22:49.182 "raid_level": "raid1", 00:22:49.182 "superblock": false, 00:22:49.182 "num_base_bdevs": 2, 00:22:49.182 "num_base_bdevs_discovered": 2, 00:22:49.182 "num_base_bdevs_operational": 2, 00:22:49.182 "process": { 00:22:49.182 "type": "rebuild", 00:22:49.182 "target": "spare", 00:22:49.182 "progress": { 00:22:49.182 "blocks": 16384, 00:22:49.182 "percent": 25 00:22:49.182 } 00:22:49.182 }, 00:22:49.182 "base_bdevs_list": [ 00:22:49.182 { 00:22:49.182 "name": "spare", 00:22:49.182 "uuid": "fc4474a8-b3dc-5b7c-8a5b-e5913d4ed585", 00:22:49.182 "is_configured": true, 00:22:49.182 "data_offset": 0, 00:22:49.182 "data_size": 65536 00:22:49.182 }, 00:22:49.182 { 00:22:49.182 "name": "BaseBdev2", 00:22:49.182 "uuid": "65ceba1d-344a-58da-a022-74bc55234d1a", 00:22:49.182 "is_configured": true, 00:22:49.182 "data_offset": 0, 00:22:49.182 "data_size": 65536 00:22:49.182 } 00:22:49.182 ] 00:22:49.182 }' 00:22:49.182 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:49.182 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:49.182 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:49.182 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:49.182 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:22:49.182 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:22:49.182 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:22:49.182 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:22:49.182 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=699 00:22:49.182 22:06:08 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:49.182 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:49.182 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:49.182 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:49.182 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:49.182 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:49.182 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.182 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.442 [2024-07-13 22:06:08.577094] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:49.442 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:49.442 "name": "raid_bdev1", 00:22:49.442 "uuid": "8d4c44d8-2f65-4f30-b4ce-3bbe04ce6c43", 00:22:49.442 "strip_size_kb": 0, 00:22:49.442 "state": "online", 00:22:49.442 "raid_level": "raid1", 00:22:49.442 "superblock": false, 00:22:49.442 "num_base_bdevs": 2, 00:22:49.442 "num_base_bdevs_discovered": 2, 00:22:49.442 "num_base_bdevs_operational": 2, 00:22:49.442 "process": { 00:22:49.442 "type": "rebuild", 00:22:49.442 "target": "spare", 00:22:49.442 "progress": { 00:22:49.442 "blocks": 22528, 00:22:49.442 "percent": 34 00:22:49.442 } 00:22:49.442 }, 00:22:49.442 "base_bdevs_list": [ 00:22:49.442 { 00:22:49.442 "name": "spare", 00:22:49.442 "uuid": "fc4474a8-b3dc-5b7c-8a5b-e5913d4ed585", 00:22:49.442 "is_configured": true, 00:22:49.442 "data_offset": 0, 00:22:49.442 
"data_size": 65536 00:22:49.442 }, 00:22:49.442 { 00:22:49.442 "name": "BaseBdev2", 00:22:49.442 "uuid": "65ceba1d-344a-58da-a022-74bc55234d1a", 00:22:49.442 "is_configured": true, 00:22:49.442 "data_offset": 0, 00:22:49.442 "data_size": 65536 00:22:49.442 } 00:22:49.442 ] 00:22:49.442 }' 00:22:49.442 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:49.442 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:49.442 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:49.442 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:49.442 22:06:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:49.701 [2024-07-13 22:06:09.018004] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:22:50.269 [2024-07-13 22:06:09.630367] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:22:50.528 22:06:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:50.528 22:06:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:50.528 22:06:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:50.528 22:06:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:50.528 22:06:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:50.528 22:06:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:50.528 22:06:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.528 22:06:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.528 22:06:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:50.528 "name": "raid_bdev1", 00:22:50.528 "uuid": "8d4c44d8-2f65-4f30-b4ce-3bbe04ce6c43", 00:22:50.528 "strip_size_kb": 0, 00:22:50.528 "state": "online", 00:22:50.528 "raid_level": "raid1", 00:22:50.528 "superblock": false, 00:22:50.528 "num_base_bdevs": 2, 00:22:50.528 "num_base_bdevs_discovered": 2, 00:22:50.528 "num_base_bdevs_operational": 2, 00:22:50.528 "process": { 00:22:50.528 "type": "rebuild", 00:22:50.528 "target": "spare", 00:22:50.528 "progress": { 00:22:50.528 "blocks": 43008, 00:22:50.528 "percent": 65 00:22:50.528 } 00:22:50.528 }, 00:22:50.528 "base_bdevs_list": [ 00:22:50.528 { 00:22:50.528 "name": "spare", 00:22:50.528 "uuid": "fc4474a8-b3dc-5b7c-8a5b-e5913d4ed585", 00:22:50.528 "is_configured": true, 00:22:50.528 "data_offset": 0, 00:22:50.528 "data_size": 65536 00:22:50.528 }, 00:22:50.528 { 00:22:50.528 "name": "BaseBdev2", 00:22:50.528 "uuid": "65ceba1d-344a-58da-a022-74bc55234d1a", 00:22:50.528 "is_configured": true, 00:22:50.528 "data_offset": 0, 00:22:50.528 "data_size": 65536 00:22:50.528 } 00:22:50.528 ] 00:22:50.528 }' 00:22:50.528 22:06:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:50.811 22:06:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:50.811 22:06:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:50.811 [2024-07-13 22:06:09.964370] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:22:50.811 22:06:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:50.811 22:06:09 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:50.811 [2024-07-13 22:06:10.178729] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:22:51.748 [2024-07-13 22:06:10.933206] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:22:51.748 [2024-07-13 22:06:10.933444] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:22:51.748 22:06:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:51.749 22:06:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:51.749 22:06:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:51.749 22:06:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:51.749 22:06:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:51.749 22:06:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:51.749 22:06:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.749 22:06:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.008 22:06:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:52.008 "name": "raid_bdev1", 00:22:52.008 "uuid": "8d4c44d8-2f65-4f30-b4ce-3bbe04ce6c43", 00:22:52.008 "strip_size_kb": 0, 00:22:52.008 "state": "online", 00:22:52.008 "raid_level": "raid1", 00:22:52.008 "superblock": false, 00:22:52.008 "num_base_bdevs": 2, 00:22:52.008 "num_base_bdevs_discovered": 2, 00:22:52.008 
"num_base_bdevs_operational": 2, 00:22:52.008 "process": { 00:22:52.008 "type": "rebuild", 00:22:52.008 "target": "spare", 00:22:52.008 "progress": { 00:22:52.008 "blocks": 59392, 00:22:52.008 "percent": 90 00:22:52.008 } 00:22:52.008 }, 00:22:52.008 "base_bdevs_list": [ 00:22:52.008 { 00:22:52.008 "name": "spare", 00:22:52.008 "uuid": "fc4474a8-b3dc-5b7c-8a5b-e5913d4ed585", 00:22:52.008 "is_configured": true, 00:22:52.008 "data_offset": 0, 00:22:52.008 "data_size": 65536 00:22:52.008 }, 00:22:52.008 { 00:22:52.008 "name": "BaseBdev2", 00:22:52.008 "uuid": "65ceba1d-344a-58da-a022-74bc55234d1a", 00:22:52.008 "is_configured": true, 00:22:52.008 "data_offset": 0, 00:22:52.008 "data_size": 65536 00:22:52.008 } 00:22:52.008 ] 00:22:52.008 }' 00:22:52.008 22:06:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:52.008 22:06:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:52.008 22:06:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:52.008 22:06:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:52.008 22:06:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:52.008 [2024-07-13 22:06:11.339599] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:52.008 [2024-07-13 22:06:11.379450] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:52.008 [2024-07-13 22:06:11.381061] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:52.954 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:52.954 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:52.954 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_name=raid_bdev1 00:22:52.954 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:52.954 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:52.954 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:52.954 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.954 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.211 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:53.211 "name": "raid_bdev1", 00:22:53.211 "uuid": "8d4c44d8-2f65-4f30-b4ce-3bbe04ce6c43", 00:22:53.211 "strip_size_kb": 0, 00:22:53.211 "state": "online", 00:22:53.211 "raid_level": "raid1", 00:22:53.211 "superblock": false, 00:22:53.211 "num_base_bdevs": 2, 00:22:53.211 "num_base_bdevs_discovered": 2, 00:22:53.211 "num_base_bdevs_operational": 2, 00:22:53.211 "base_bdevs_list": [ 00:22:53.211 { 00:22:53.211 "name": "spare", 00:22:53.211 "uuid": "fc4474a8-b3dc-5b7c-8a5b-e5913d4ed585", 00:22:53.211 "is_configured": true, 00:22:53.211 "data_offset": 0, 00:22:53.211 "data_size": 65536 00:22:53.211 }, 00:22:53.211 { 00:22:53.211 "name": "BaseBdev2", 00:22:53.211 "uuid": "65ceba1d-344a-58da-a022-74bc55234d1a", 00:22:53.211 "is_configured": true, 00:22:53.211 "data_offset": 0, 00:22:53.211 "data_size": 65536 00:22:53.211 } 00:22:53.211 ] 00:22:53.211 }' 00:22:53.211 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:53.211 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:53.211 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:53.211 
22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:22:53.211 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:22:53.211 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:53.211 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:53.211 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:53.211 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:53.211 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:53.211 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.211 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.470 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:53.470 "name": "raid_bdev1", 00:22:53.470 "uuid": "8d4c44d8-2f65-4f30-b4ce-3bbe04ce6c43", 00:22:53.470 "strip_size_kb": 0, 00:22:53.470 "state": "online", 00:22:53.470 "raid_level": "raid1", 00:22:53.470 "superblock": false, 00:22:53.470 "num_base_bdevs": 2, 00:22:53.470 "num_base_bdevs_discovered": 2, 00:22:53.470 "num_base_bdevs_operational": 2, 00:22:53.470 "base_bdevs_list": [ 00:22:53.470 { 00:22:53.470 "name": "spare", 00:22:53.470 "uuid": "fc4474a8-b3dc-5b7c-8a5b-e5913d4ed585", 00:22:53.470 "is_configured": true, 00:22:53.470 "data_offset": 0, 00:22:53.470 "data_size": 65536 00:22:53.470 }, 00:22:53.470 { 00:22:53.470 "name": "BaseBdev2", 00:22:53.470 "uuid": "65ceba1d-344a-58da-a022-74bc55234d1a", 00:22:53.470 "is_configured": true, 00:22:53.470 "data_offset": 0, 00:22:53.470 "data_size": 65536 00:22:53.470 } 
00:22:53.470 ] 00:22:53.470 }' 00:22:53.470 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:53.470 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:53.470 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:53.470 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:53.470 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:53.470 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:53.470 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:53.470 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:53.470 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:53.470 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:53.470 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:53.470 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:53.470 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:53.470 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:53.470 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.470 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.729 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:22:53.729 "name": "raid_bdev1", 00:22:53.729 "uuid": "8d4c44d8-2f65-4f30-b4ce-3bbe04ce6c43", 00:22:53.729 "strip_size_kb": 0, 00:22:53.729 "state": "online", 00:22:53.729 "raid_level": "raid1", 00:22:53.729 "superblock": false, 00:22:53.729 "num_base_bdevs": 2, 00:22:53.729 "num_base_bdevs_discovered": 2, 00:22:53.729 "num_base_bdevs_operational": 2, 00:22:53.729 "base_bdevs_list": [ 00:22:53.729 { 00:22:53.729 "name": "spare", 00:22:53.729 "uuid": "fc4474a8-b3dc-5b7c-8a5b-e5913d4ed585", 00:22:53.729 "is_configured": true, 00:22:53.729 "data_offset": 0, 00:22:53.729 "data_size": 65536 00:22:53.729 }, 00:22:53.729 { 00:22:53.729 "name": "BaseBdev2", 00:22:53.729 "uuid": "65ceba1d-344a-58da-a022-74bc55234d1a", 00:22:53.729 "is_configured": true, 00:22:53.729 "data_offset": 0, 00:22:53.729 "data_size": 65536 00:22:53.729 } 00:22:53.729 ] 00:22:53.729 }' 00:22:53.729 22:06:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:53.729 22:06:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:54.297 22:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:54.297 [2024-07-13 22:06:13.574998] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:54.297 [2024-07-13 22:06:13.575028] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:54.297 00:22:54.297 Latency(us) 00:22:54.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.297 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:54.297 raid_bdev1 : 10.13 117.20 351.59 0.00 0.00 12349.38 285.08 112407.35 00:22:54.297 =================================================================================================================== 00:22:54.297 Total : 
117.20 351.59 0.00 0.00 12349.38 285.08 112407.35 00:22:54.297 [2024-07-13 22:06:13.678210] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:54.297 [2024-07-13 22:06:13.678243] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:54.297 [2024-07-13 22:06:13.678315] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:54.297 [2024-07-13 22:06:13.678327] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:22:54.297 0 00:22:54.557 22:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.557 22:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:22:54.557 22:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:22:54.557 22:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:22:54.557 22:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:54.557 22:06:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:54.557 22:06:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:54.557 22:06:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:22:54.557 22:06:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:54.557 22:06:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:54.557 22:06:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:54.557 22:06:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:22:54.557 22:06:13 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:54.557 22:06:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:54.557 22:06:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:54.817 /dev/nbd0 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:54.817 1+0 records in 00:22:54.817 1+0 records out 00:22:54.817 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262322 s, 15.6 MB/s 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@884 -- # size=4096 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:54.817 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 
00:22:55.077 /dev/nbd1 00:22:55.078 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:55.078 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:55.078 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:22:55.078 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:22:55.078 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:55.078 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:55.078 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:22:55.078 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:22:55.078 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:55.078 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:55.078 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:55.078 1+0 records in 00:22:55.078 1+0 records out 00:22:55.078 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289336 s, 14.2 MB/s 00:22:55.078 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:55.078 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:22:55.078 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:55.078 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:55.078 22:06:14 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:22:55.078 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:55.078 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:55.078 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:55.078 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:55.078 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:55.078 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:55.078 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:55.078 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:22:55.078 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:55.078 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:55.337 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:55.337 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:55.337 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:55.337 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:55.337 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:55.337 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:55.337 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:22:55.337 22:06:14 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:22:55.337 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:55.337 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:55.337 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:55.337 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:55.337 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:22:55.337 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:55.337 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:55.597 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:55.597 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:55.597 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:55.597 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:55.597 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:55.597 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:55.597 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:22:55.597 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:22:55.597 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:22:55.597 22:06:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 1469561 00:22:55.597 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@948 -- # '[' -z 
1469561 ']' 00:22:55.597 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # kill -0 1469561 00:22:55.597 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # uname 00:22:55.597 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:55.597 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1469561 00:22:55.597 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:55.597 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:55.597 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1469561' 00:22:55.597 killing process with pid 1469561 00:22:55.597 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # kill 1469561 00:22:55.597 Received shutdown signal, test time was about 11.381763 seconds 00:22:55.597 00:22:55.597 Latency(us) 00:22:55.597 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:55.597 =================================================================================================================== 00:22:55.597 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:55.597 [2024-07-13 22:06:14.915577] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:55.597 22:06:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # wait 1469561 00:22:55.856 [2024-07-13 22:06:15.079740] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:22:57.236 00:22:57.236 real 0m16.264s 00:22:57.236 user 0m22.996s 00:22:57.236 sys 0m2.421s 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:57.236 22:06:16 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:57.236 ************************************ 00:22:57.236 END TEST raid_rebuild_test_io 00:22:57.236 ************************************ 00:22:57.236 22:06:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:57.236 22:06:16 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:22:57.236 22:06:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:22:57.236 22:06:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:57.236 22:06:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:57.236 ************************************ 00:22:57.236 START TEST raid_rebuild_test_sb_io 00:22:57.236 ************************************ 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true true true 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- 
# (( i <= num_base_bdevs )) 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=1473208 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 1473208 /var/tmp/spdk-raid.sock 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L 
bdev_raid 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@829 -- # '[' -z 1473208 ']' 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:57.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:57.236 22:06:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:57.236 [2024-07-13 22:06:16.550251] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:57.236 [2024-07-13 22:06:16.550348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473208 ] 00:22:57.237 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:57.237 Zero copy mechanism will not be used. 
00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:57.496 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:57.496 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:57.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:57.496 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:57.496 [2024-07-13 22:06:16.711631] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.755 [2024-07-13 22:06:16.913811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.014 [2024-07-13 22:06:17.147666] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:58.014 [2024-07-13 22:06:17.147703] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:58.014 22:06:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:58.014 22:06:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # return 0 00:22:58.014 22:06:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:22:58.015 22:06:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:58.274 BaseBdev1_malloc 00:22:58.274 22:06:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:58.534 [2024-07-13 22:06:17.672875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:58.534 [2024-07-13 22:06:17.672950] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:58.534 [2024-07-13 22:06:17.672993] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:22:58.534 [2024-07-13 22:06:17.673008] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:58.534 [2024-07-13 22:06:17.675185] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:58.534 [2024-07-13 22:06:17.675219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:58.534 BaseBdev1 00:22:58.534 22:06:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:22:58.534 22:06:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:58.534 BaseBdev2_malloc 00:22:58.534 22:06:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:58.793 [2024-07-13 22:06:18.057267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:58.793 [2024-07-13 22:06:18.057323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:58.793 [2024-07-13 22:06:18.057349] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:22:58.793 [2024-07-13 22:06:18.057365] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:58.793 [2024-07-13 22:06:18.059532] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:58.793 [2024-07-13 22:06:18.059564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:58.793 BaseBdev2 00:22:58.793 22:06:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:59.052 spare_malloc 00:22:59.052 22:06:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:59.052 spare_delay 00:22:59.311 22:06:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:59.311 [2024-07-13 22:06:18.595119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:59.311 [2024-07-13 22:06:18.595172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.311 [2024-07-13 22:06:18.595210] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:22:59.311 [2024-07-13 22:06:18.595223] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.311 [2024-07-13 22:06:18.597304] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.311 [2024-07-13 22:06:18.597333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:59.311 spare 00:22:59.311 22:06:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:22:59.570 [2024-07-13 22:06:18.759571] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:59.570 [2024-07-13 22:06:18.761308] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:59.570 [2024-07-13 22:06:18.761453] 
bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:22:59.570 [2024-07-13 22:06:18.761471] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:59.570 [2024-07-13 22:06:18.761720] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:22:59.570 [2024-07-13 22:06:18.761889] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:22:59.570 [2024-07-13 22:06:18.761900] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:22:59.570 [2024-07-13 22:06:18.762037] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:59.570 22:06:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:59.570 22:06:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:59.570 22:06:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:59.570 22:06:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:59.570 22:06:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:59.570 22:06:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:59.570 22:06:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:59.570 22:06:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:59.570 22:06:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:59.570 22:06:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:59.570 22:06:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:22:59.570 22:06:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.570 22:06:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:59.570 "name": "raid_bdev1", 00:22:59.570 "uuid": "58e34fcc-5089-4367-ba56-35a9076c372a", 00:22:59.570 "strip_size_kb": 0, 00:22:59.570 "state": "online", 00:22:59.570 "raid_level": "raid1", 00:22:59.570 "superblock": true, 00:22:59.570 "num_base_bdevs": 2, 00:22:59.570 "num_base_bdevs_discovered": 2, 00:22:59.570 "num_base_bdevs_operational": 2, 00:22:59.570 "base_bdevs_list": [ 00:22:59.570 { 00:22:59.570 "name": "BaseBdev1", 00:22:59.570 "uuid": "e7f7066a-ac6b-5f3c-b709-228f193fa73c", 00:22:59.570 "is_configured": true, 00:22:59.570 "data_offset": 2048, 00:22:59.570 "data_size": 63488 00:22:59.570 }, 00:22:59.570 { 00:22:59.570 "name": "BaseBdev2", 00:22:59.570 "uuid": "62774522-6961-5e72-94a1-2c239e4c654c", 00:22:59.570 "is_configured": true, 00:22:59.570 "data_offset": 2048, 00:22:59.570 "data_size": 63488 00:22:59.570 } 00:22:59.570 ] 00:22:59.570 }' 00:22:59.570 22:06:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:59.570 22:06:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:00.137 22:06:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:00.137 22:06:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:23:00.395 [2024-07-13 22:06:19.622135] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:00.395 22:06:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:23:00.395 22:06:19 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.395 22:06:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:00.654 22:06:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:23:00.654 22:06:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:23:00.654 22:06:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:00.654 22:06:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:00.654 [2024-07-13 22:06:19.902966] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:23:00.654 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:00.654 Zero copy mechanism will not be used. 00:23:00.654 Running I/O for 60 seconds... 
00:23:00.654 [2024-07-13 22:06:19.978557] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:00.654 [2024-07-13 22:06:19.984094] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000108b0 00:23:00.654 22:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:00.654 22:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:00.654 22:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:00.654 22:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:00.654 22:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:00.654 22:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:00.654 22:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:00.654 22:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:00.654 22:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:00.654 22:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:00.654 22:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.654 22:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.913 22:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:00.913 "name": "raid_bdev1", 00:23:00.913 "uuid": "58e34fcc-5089-4367-ba56-35a9076c372a", 00:23:00.913 "strip_size_kb": 0, 00:23:00.913 "state": "online", 00:23:00.913 "raid_level": 
"raid1", 00:23:00.913 "superblock": true, 00:23:00.913 "num_base_bdevs": 2, 00:23:00.913 "num_base_bdevs_discovered": 1, 00:23:00.913 "num_base_bdevs_operational": 1, 00:23:00.913 "base_bdevs_list": [ 00:23:00.913 { 00:23:00.913 "name": null, 00:23:00.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.913 "is_configured": false, 00:23:00.913 "data_offset": 2048, 00:23:00.913 "data_size": 63488 00:23:00.913 }, 00:23:00.913 { 00:23:00.913 "name": "BaseBdev2", 00:23:00.913 "uuid": "62774522-6961-5e72-94a1-2c239e4c654c", 00:23:00.913 "is_configured": true, 00:23:00.913 "data_offset": 2048, 00:23:00.913 "data_size": 63488 00:23:00.913 } 00:23:00.913 ] 00:23:00.913 }' 00:23:00.913 22:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:00.913 22:06:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:01.546 22:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:01.546 [2024-07-13 22:06:20.808104] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:01.546 22:06:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:23:01.546 [2024-07-13 22:06:20.857063] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:23:01.546 [2024-07-13 22:06:20.858887] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:01.805 [2024-07-13 22:06:20.972378] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:01.805 [2024-07-13 22:06:20.972796] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:01.805 [2024-07-13 22:06:21.186627] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:01.805 [2024-07-13 22:06:21.186779] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:02.374 [2024-07-13 22:06:21.556554] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:02.374 [2024-07-13 22:06:21.669416] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:02.374 [2024-07-13 22:06:21.669624] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:02.632 22:06:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:02.632 22:06:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:02.632 22:06:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:02.632 22:06:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:02.632 22:06:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:02.632 22:06:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.632 22:06:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.632 [2024-07-13 22:06:21.896998] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:02.891 22:06:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:02.891 "name": "raid_bdev1", 00:23:02.891 "uuid": "58e34fcc-5089-4367-ba56-35a9076c372a", 00:23:02.891 
"strip_size_kb": 0, 00:23:02.891 "state": "online", 00:23:02.891 "raid_level": "raid1", 00:23:02.891 "superblock": true, 00:23:02.891 "num_base_bdevs": 2, 00:23:02.891 "num_base_bdevs_discovered": 2, 00:23:02.891 "num_base_bdevs_operational": 2, 00:23:02.891 "process": { 00:23:02.891 "type": "rebuild", 00:23:02.891 "target": "spare", 00:23:02.891 "progress": { 00:23:02.891 "blocks": 14336, 00:23:02.891 "percent": 22 00:23:02.891 } 00:23:02.891 }, 00:23:02.891 "base_bdevs_list": [ 00:23:02.891 { 00:23:02.891 "name": "spare", 00:23:02.891 "uuid": "0c0841ce-3449-5656-aebb-4fa69d994530", 00:23:02.891 "is_configured": true, 00:23:02.891 "data_offset": 2048, 00:23:02.891 "data_size": 63488 00:23:02.891 }, 00:23:02.891 { 00:23:02.891 "name": "BaseBdev2", 00:23:02.891 "uuid": "62774522-6961-5e72-94a1-2c239e4c654c", 00:23:02.891 "is_configured": true, 00:23:02.891 "data_offset": 2048, 00:23:02.891 "data_size": 63488 00:23:02.891 } 00:23:02.891 ] 00:23:02.891 }' 00:23:02.891 22:06:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:02.891 22:06:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:02.891 22:06:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:02.891 22:06:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:02.892 22:06:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:02.892 [2024-07-13 22:06:22.105521] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:02.892 [2024-07-13 22:06:22.248316] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:03.150 [2024-07-13 22:06:22.421707] 
bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:03.150 [2024-07-13 22:06:22.429100] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:03.150 [2024-07-13 22:06:22.429134] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:03.150 [2024-07-13 22:06:22.429149] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:03.150 [2024-07-13 22:06:22.476094] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000108b0 00:23:03.150 22:06:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:03.150 22:06:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:03.150 22:06:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:03.151 22:06:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:03.151 22:06:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:03.151 22:06:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:03.151 22:06:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:03.151 22:06:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:03.151 22:06:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:03.151 22:06:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:03.151 22:06:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.151 22:06:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.409 22:06:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:03.409 "name": "raid_bdev1", 00:23:03.409 "uuid": "58e34fcc-5089-4367-ba56-35a9076c372a", 00:23:03.409 "strip_size_kb": 0, 00:23:03.409 "state": "online", 00:23:03.409 "raid_level": "raid1", 00:23:03.409 "superblock": true, 00:23:03.409 "num_base_bdevs": 2, 00:23:03.409 "num_base_bdevs_discovered": 1, 00:23:03.409 "num_base_bdevs_operational": 1, 00:23:03.409 "base_bdevs_list": [ 00:23:03.409 { 00:23:03.409 "name": null, 00:23:03.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.409 "is_configured": false, 00:23:03.409 "data_offset": 2048, 00:23:03.409 "data_size": 63488 00:23:03.409 }, 00:23:03.409 { 00:23:03.409 "name": "BaseBdev2", 00:23:03.409 "uuid": "62774522-6961-5e72-94a1-2c239e4c654c", 00:23:03.409 "is_configured": true, 00:23:03.409 "data_offset": 2048, 00:23:03.409 "data_size": 63488 00:23:03.409 } 00:23:03.409 ] 00:23:03.409 }' 00:23:03.409 22:06:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:03.409 22:06:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:03.977 22:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:03.977 22:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:03.977 22:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:03.977 22:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:03.977 22:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:03.977 22:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.977 22:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.235 22:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:04.235 "name": "raid_bdev1", 00:23:04.235 "uuid": "58e34fcc-5089-4367-ba56-35a9076c372a", 00:23:04.235 "strip_size_kb": 0, 00:23:04.235 "state": "online", 00:23:04.235 "raid_level": "raid1", 00:23:04.235 "superblock": true, 00:23:04.235 "num_base_bdevs": 2, 00:23:04.235 "num_base_bdevs_discovered": 1, 00:23:04.235 "num_base_bdevs_operational": 1, 00:23:04.235 "base_bdevs_list": [ 00:23:04.235 { 00:23:04.235 "name": null, 00:23:04.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.235 "is_configured": false, 00:23:04.235 "data_offset": 2048, 00:23:04.235 "data_size": 63488 00:23:04.235 }, 00:23:04.235 { 00:23:04.235 "name": "BaseBdev2", 00:23:04.235 "uuid": "62774522-6961-5e72-94a1-2c239e4c654c", 00:23:04.235 "is_configured": true, 00:23:04.235 "data_offset": 2048, 00:23:04.235 "data_size": 63488 00:23:04.235 } 00:23:04.235 ] 00:23:04.235 }' 00:23:04.235 22:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:04.235 22:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:04.235 22:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:04.236 22:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:04.236 22:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:04.236 [2024-07-13 22:06:23.610542] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare 
is claimed 00:23:04.494 22:06:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:04.494 [2024-07-13 22:06:23.681414] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010a50 00:23:04.494 [2024-07-13 22:06:23.683259] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:04.494 [2024-07-13 22:06:23.801832] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:04.494 [2024-07-13 22:06:23.802245] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:04.753 [2024-07-13 22:06:24.029409] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:04.753 [2024-07-13 22:06:24.029651] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:05.012 [2024-07-13 22:06:24.353032] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:05.271 [2024-07-13 22:06:24.478997] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:05.531 22:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:05.531 22:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:05.531 22:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:05.531 22:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:05.531 22:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:05.531 22:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.531 22:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.531 [2024-07-13 22:06:24.814619] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:05.531 22:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:05.531 "name": "raid_bdev1", 00:23:05.531 "uuid": "58e34fcc-5089-4367-ba56-35a9076c372a", 00:23:05.531 "strip_size_kb": 0, 00:23:05.531 "state": "online", 00:23:05.531 "raid_level": "raid1", 00:23:05.531 "superblock": true, 00:23:05.531 "num_base_bdevs": 2, 00:23:05.531 "num_base_bdevs_discovered": 2, 00:23:05.531 "num_base_bdevs_operational": 2, 00:23:05.531 "process": { 00:23:05.531 "type": "rebuild", 00:23:05.531 "target": "spare", 00:23:05.531 "progress": { 00:23:05.531 "blocks": 16384, 00:23:05.531 "percent": 25 00:23:05.531 } 00:23:05.531 }, 00:23:05.531 "base_bdevs_list": [ 00:23:05.531 { 00:23:05.531 "name": "spare", 00:23:05.531 "uuid": "0c0841ce-3449-5656-aebb-4fa69d994530", 00:23:05.531 "is_configured": true, 00:23:05.531 "data_offset": 2048, 00:23:05.531 "data_size": 63488 00:23:05.531 }, 00:23:05.531 { 00:23:05.531 "name": "BaseBdev2", 00:23:05.531 "uuid": "62774522-6961-5e72-94a1-2c239e4c654c", 00:23:05.531 "is_configured": true, 00:23:05.531 "data_offset": 2048, 00:23:05.531 "data_size": 63488 00:23:05.531 } 00:23:05.531 ] 00:23:05.531 }' 00:23:05.531 22:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:05.531 22:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:05.531 22:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:05.790 22:06:24 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:05.790 22:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:23:05.791 22:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:23:05.791 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:23:05.791 22:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:23:05.791 22:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:23:05.791 22:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:23:05.791 22:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=715 00:23:05.791 22:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:05.791 22:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:05.791 22:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:05.791 22:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:05.791 22:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:05.791 22:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:05.791 22:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.791 22:06:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.791 [2024-07-13 22:06:25.042259] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 
offset_end: 24576 00:23:05.791 22:06:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:05.791 "name": "raid_bdev1", 00:23:05.791 "uuid": "58e34fcc-5089-4367-ba56-35a9076c372a", 00:23:05.791 "strip_size_kb": 0, 00:23:05.791 "state": "online", 00:23:05.791 "raid_level": "raid1", 00:23:05.791 "superblock": true, 00:23:05.791 "num_base_bdevs": 2, 00:23:05.791 "num_base_bdevs_discovered": 2, 00:23:05.791 "num_base_bdevs_operational": 2, 00:23:05.791 "process": { 00:23:05.791 "type": "rebuild", 00:23:05.791 "target": "spare", 00:23:05.791 "progress": { 00:23:05.791 "blocks": 20480, 00:23:05.791 "percent": 32 00:23:05.791 } 00:23:05.791 }, 00:23:05.791 "base_bdevs_list": [ 00:23:05.791 { 00:23:05.791 "name": "spare", 00:23:05.791 "uuid": "0c0841ce-3449-5656-aebb-4fa69d994530", 00:23:05.791 "is_configured": true, 00:23:05.791 "data_offset": 2048, 00:23:05.791 "data_size": 63488 00:23:05.791 }, 00:23:05.791 { 00:23:05.791 "name": "BaseBdev2", 00:23:05.791 "uuid": "62774522-6961-5e72-94a1-2c239e4c654c", 00:23:05.791 "is_configured": true, 00:23:05.791 "data_offset": 2048, 00:23:05.791 "data_size": 63488 00:23:05.791 } 00:23:05.791 ] 00:23:05.791 }' 00:23:05.791 22:06:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:05.791 22:06:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:05.791 22:06:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:05.791 22:06:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:05.791 22:06:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:23:06.050 [2024-07-13 22:06:25.245590] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:06.310 [2024-07-13 22:06:25.483696] bdev_raid.c: 
839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:23:06.310 [2024-07-13 22:06:25.685304] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:23:06.878 [2024-07-13 22:06:25.989685] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:23:06.878 22:06:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:06.878 22:06:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:06.878 22:06:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:06.878 22:06:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:06.878 22:06:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:06.878 22:06:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:06.878 22:06:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.878 22:06:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.138 22:06:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:07.138 "name": "raid_bdev1", 00:23:07.138 "uuid": "58e34fcc-5089-4367-ba56-35a9076c372a", 00:23:07.138 "strip_size_kb": 0, 00:23:07.138 "state": "online", 00:23:07.138 "raid_level": "raid1", 00:23:07.138 "superblock": true, 00:23:07.138 "num_base_bdevs": 2, 00:23:07.138 "num_base_bdevs_discovered": 2, 00:23:07.138 "num_base_bdevs_operational": 2, 00:23:07.138 "process": { 00:23:07.138 "type": "rebuild", 00:23:07.138 "target": 
"spare", 00:23:07.138 "progress": { 00:23:07.138 "blocks": 36864, 00:23:07.138 "percent": 58 00:23:07.138 } 00:23:07.138 }, 00:23:07.138 "base_bdevs_list": [ 00:23:07.138 { 00:23:07.138 "name": "spare", 00:23:07.138 "uuid": "0c0841ce-3449-5656-aebb-4fa69d994530", 00:23:07.138 "is_configured": true, 00:23:07.138 "data_offset": 2048, 00:23:07.138 "data_size": 63488 00:23:07.138 }, 00:23:07.138 { 00:23:07.138 "name": "BaseBdev2", 00:23:07.138 "uuid": "62774522-6961-5e72-94a1-2c239e4c654c", 00:23:07.138 "is_configured": true, 00:23:07.138 "data_offset": 2048, 00:23:07.138 "data_size": 63488 00:23:07.138 } 00:23:07.138 ] 00:23:07.138 }' 00:23:07.138 22:06:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:07.138 22:06:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:07.138 22:06:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:07.138 22:06:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:07.138 22:06:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:23:07.138 [2024-07-13 22:06:26.439907] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:23:07.138 [2024-07-13 22:06:26.440147] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:23:07.398 [2024-07-13 22:06:26.756228] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:23:07.656 [2024-07-13 22:06:26.958477] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:23:08.223 22:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:08.223 22:06:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:08.223 22:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:08.223 22:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:08.223 22:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:08.224 22:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:08.224 22:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.224 22:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.224 [2024-07-13 22:06:27.470788] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:23:08.224 [2024-07-13 22:06:27.573055] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:23:08.224 22:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:08.224 "name": "raid_bdev1", 00:23:08.224 "uuid": "58e34fcc-5089-4367-ba56-35a9076c372a", 00:23:08.224 "strip_size_kb": 0, 00:23:08.224 "state": "online", 00:23:08.224 "raid_level": "raid1", 00:23:08.224 "superblock": true, 00:23:08.224 "num_base_bdevs": 2, 00:23:08.224 "num_base_bdevs_discovered": 2, 00:23:08.224 "num_base_bdevs_operational": 2, 00:23:08.224 "process": { 00:23:08.224 "type": "rebuild", 00:23:08.224 "target": "spare", 00:23:08.224 "progress": { 00:23:08.224 "blocks": 59392, 00:23:08.224 "percent": 93 00:23:08.224 } 00:23:08.224 }, 00:23:08.224 "base_bdevs_list": [ 00:23:08.224 { 00:23:08.224 "name": "spare", 00:23:08.224 "uuid": 
"0c0841ce-3449-5656-aebb-4fa69d994530", 00:23:08.224 "is_configured": true, 00:23:08.224 "data_offset": 2048, 00:23:08.224 "data_size": 63488 00:23:08.224 }, 00:23:08.224 { 00:23:08.224 "name": "BaseBdev2", 00:23:08.224 "uuid": "62774522-6961-5e72-94a1-2c239e4c654c", 00:23:08.224 "is_configured": true, 00:23:08.224 "data_offset": 2048, 00:23:08.224 "data_size": 63488 00:23:08.224 } 00:23:08.224 ] 00:23:08.224 }' 00:23:08.224 22:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:08.483 22:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:08.483 22:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:08.483 22:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:08.483 22:06:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:23:08.742 [2024-07-13 22:06:27.894045] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:08.742 [2024-07-13 22:06:27.994364] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:08.742 [2024-07-13 22:06:27.996049] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:09.311 22:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:09.311 22:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:09.311 22:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:09.311 22:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:09.311 22:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:09.311 22:06:28 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:09.571 22:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.571 22:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.571 22:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:09.571 "name": "raid_bdev1", 00:23:09.571 "uuid": "58e34fcc-5089-4367-ba56-35a9076c372a", 00:23:09.571 "strip_size_kb": 0, 00:23:09.571 "state": "online", 00:23:09.571 "raid_level": "raid1", 00:23:09.571 "superblock": true, 00:23:09.571 "num_base_bdevs": 2, 00:23:09.571 "num_base_bdevs_discovered": 2, 00:23:09.571 "num_base_bdevs_operational": 2, 00:23:09.571 "base_bdevs_list": [ 00:23:09.571 { 00:23:09.571 "name": "spare", 00:23:09.571 "uuid": "0c0841ce-3449-5656-aebb-4fa69d994530", 00:23:09.571 "is_configured": true, 00:23:09.571 "data_offset": 2048, 00:23:09.571 "data_size": 63488 00:23:09.571 }, 00:23:09.571 { 00:23:09.571 "name": "BaseBdev2", 00:23:09.571 "uuid": "62774522-6961-5e72-94a1-2c239e4c654c", 00:23:09.571 "is_configured": true, 00:23:09.571 "data_offset": 2048, 00:23:09.571 "data_size": 63488 00:23:09.571 } 00:23:09.571 ] 00:23:09.571 }' 00:23:09.571 22:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:09.571 22:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:09.571 22:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:09.571 22:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:23:09.571 22:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:23:09.571 22:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:23:09.571 22:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:09.571 22:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:09.571 22:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:09.571 22:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:09.571 22:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.830 22:06:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.830 22:06:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:09.830 "name": "raid_bdev1", 00:23:09.830 "uuid": "58e34fcc-5089-4367-ba56-35a9076c372a", 00:23:09.830 "strip_size_kb": 0, 00:23:09.830 "state": "online", 00:23:09.830 "raid_level": "raid1", 00:23:09.830 "superblock": true, 00:23:09.830 "num_base_bdevs": 2, 00:23:09.830 "num_base_bdevs_discovered": 2, 00:23:09.830 "num_base_bdevs_operational": 2, 00:23:09.830 "base_bdevs_list": [ 00:23:09.830 { 00:23:09.830 "name": "spare", 00:23:09.830 "uuid": "0c0841ce-3449-5656-aebb-4fa69d994530", 00:23:09.830 "is_configured": true, 00:23:09.830 "data_offset": 2048, 00:23:09.830 "data_size": 63488 00:23:09.830 }, 00:23:09.830 { 00:23:09.830 "name": "BaseBdev2", 00:23:09.830 "uuid": "62774522-6961-5e72-94a1-2c239e4c654c", 00:23:09.830 "is_configured": true, 00:23:09.830 "data_offset": 2048, 00:23:09.830 "data_size": 63488 00:23:09.830 } 00:23:09.830 ] 00:23:09.830 }' 00:23:09.830 22:06:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:09.830 22:06:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ 
none == \n\o\n\e ]] 00:23:09.830 22:06:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:09.830 22:06:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:09.830 22:06:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:09.830 22:06:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:09.830 22:06:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:09.830 22:06:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:09.830 22:06:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:09.830 22:06:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:09.830 22:06:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:09.830 22:06:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:09.830 22:06:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:09.830 22:06:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:09.830 22:06:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.830 22:06:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.089 22:06:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:10.089 "name": "raid_bdev1", 00:23:10.089 "uuid": "58e34fcc-5089-4367-ba56-35a9076c372a", 00:23:10.089 "strip_size_kb": 0, 00:23:10.089 "state": "online", 00:23:10.089 
"raid_level": "raid1", 00:23:10.089 "superblock": true, 00:23:10.089 "num_base_bdevs": 2, 00:23:10.089 "num_base_bdevs_discovered": 2, 00:23:10.089 "num_base_bdevs_operational": 2, 00:23:10.089 "base_bdevs_list": [ 00:23:10.089 { 00:23:10.089 "name": "spare", 00:23:10.089 "uuid": "0c0841ce-3449-5656-aebb-4fa69d994530", 00:23:10.089 "is_configured": true, 00:23:10.089 "data_offset": 2048, 00:23:10.089 "data_size": 63488 00:23:10.089 }, 00:23:10.089 { 00:23:10.089 "name": "BaseBdev2", 00:23:10.089 "uuid": "62774522-6961-5e72-94a1-2c239e4c654c", 00:23:10.089 "is_configured": true, 00:23:10.089 "data_offset": 2048, 00:23:10.089 "data_size": 63488 00:23:10.089 } 00:23:10.089 ] 00:23:10.089 }' 00:23:10.089 22:06:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:10.089 22:06:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:10.657 22:06:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:10.657 [2024-07-13 22:06:30.018930] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:10.657 [2024-07-13 22:06:30.018967] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:10.916 00:23:10.916 Latency(us) 00:23:10.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.917 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:23:10.917 raid_bdev1 : 10.13 118.67 356.00 0.00 0.00 11491.88 280.17 108213.04 00:23:10.917 =================================================================================================================== 00:23:10.917 Total : 118.67 356.00 0.00 0.00 11491.88 280.17 108213.04 00:23:10.917 [2024-07-13 22:06:30.079358] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:10.917 
[2024-07-13 22:06:30.079398] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:10.917 [2024-07-13 22:06:30.079475] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:10.917 [2024-07-13 22:06:30.079489] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:23:10.917 0 00:23:10.917 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.917 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:23:10.917 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:23:10.917 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:23:10.917 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:10.917 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:23:10.917 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:10.917 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:23:10.917 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:10.917 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:10.917 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:10.917 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:23:10.917 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:10.917 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 
1 )) 00:23:10.917 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:23:11.176 /dev/nbd0 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:11.176 1+0 records in 00:23:11.176 1+0 records out 00:23:11.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276163 s, 14.8 MB/s 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:11.176 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:23:11.435 /dev/nbd1 00:23:11.435 22:06:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:11.435 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:11.435 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:23:11.435 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:23:11.435 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:11.435 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:11.435 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:23:11.435 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:23:11.435 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:11.435 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:11.435 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:11.435 1+0 records in 00:23:11.435 1+0 records out 00:23:11.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264202 s, 15.5 MB/s 00:23:11.435 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:11.435 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:23:11.435 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:11.435 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:11.435 22:06:30 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:23:11.435 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:11.435 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:11.435 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:11.693 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:11.693 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:11.693 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:23:11.693 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:11.693 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:23:11.693 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:11.693 22:06:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:11.693 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:11.693 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:11.693 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:11.693 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:11.693 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:11.693 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:11.693 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # 
break 00:23:11.693 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:23:11.693 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:11.693 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:11.693 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:11.693 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:11.693 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:23:11.693 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:11.693 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:11.951 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:11.951 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:11.951 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:11.951 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:11.951 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:11.951 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:11.951 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:23:11.951 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:23:11.951 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:23:11.951 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:12.209 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:12.209 [2024-07-13 22:06:31.555984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:12.209 [2024-07-13 22:06:31.556037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:12.209 [2024-07-13 22:06:31.556078] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043580 00:23:12.209 [2024-07-13 22:06:31.556091] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:12.209 [2024-07-13 22:06:31.558229] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:12.209 [2024-07-13 22:06:31.558258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:12.209 [2024-07-13 22:06:31.558338] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:12.209 [2024-07-13 22:06:31.558389] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:12.209 [2024-07-13 22:06:31.558538] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:12.209 spare 00:23:12.209 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:12.209 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:12.209 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:12.209 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:12.209 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:12.209 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:12.209 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:12.209 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:12.209 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:12.209 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:12.209 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.209 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:12.468 [2024-07-13 22:06:31.658866] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043b80 00:23:12.468 [2024-07-13 22:06:31.658900] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:12.468 [2024-07-13 22:06:31.659179] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00001f930 00:23:12.468 [2024-07-13 22:06:31.659397] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043b80 00:23:12.468 [2024-07-13 22:06:31.659409] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043b80 00:23:12.468 [2024-07-13 22:06:31.659565] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:12.468 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:12.468 "name": "raid_bdev1", 00:23:12.468 "uuid": "58e34fcc-5089-4367-ba56-35a9076c372a", 00:23:12.468 "strip_size_kb": 0, 00:23:12.468 "state": "online", 00:23:12.468 "raid_level": 
"raid1", 00:23:12.468 "superblock": true, 00:23:12.468 "num_base_bdevs": 2, 00:23:12.468 "num_base_bdevs_discovered": 2, 00:23:12.468 "num_base_bdevs_operational": 2, 00:23:12.468 "base_bdevs_list": [ 00:23:12.468 { 00:23:12.468 "name": "spare", 00:23:12.468 "uuid": "0c0841ce-3449-5656-aebb-4fa69d994530", 00:23:12.468 "is_configured": true, 00:23:12.468 "data_offset": 2048, 00:23:12.468 "data_size": 63488 00:23:12.468 }, 00:23:12.468 { 00:23:12.468 "name": "BaseBdev2", 00:23:12.468 "uuid": "62774522-6961-5e72-94a1-2c239e4c654c", 00:23:12.468 "is_configured": true, 00:23:12.468 "data_offset": 2048, 00:23:12.468 "data_size": 63488 00:23:12.468 } 00:23:12.468 ] 00:23:12.468 }' 00:23:12.468 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:12.468 22:06:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:13.034 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:13.034 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:13.034 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:13.034 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:13.034 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:13.034 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.034 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.034 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:13.034 "name": "raid_bdev1", 00:23:13.034 "uuid": "58e34fcc-5089-4367-ba56-35a9076c372a", 00:23:13.034 
"strip_size_kb": 0, 00:23:13.034 "state": "online", 00:23:13.034 "raid_level": "raid1", 00:23:13.034 "superblock": true, 00:23:13.034 "num_base_bdevs": 2, 00:23:13.034 "num_base_bdevs_discovered": 2, 00:23:13.034 "num_base_bdevs_operational": 2, 00:23:13.034 "base_bdevs_list": [ 00:23:13.034 { 00:23:13.034 "name": "spare", 00:23:13.034 "uuid": "0c0841ce-3449-5656-aebb-4fa69d994530", 00:23:13.034 "is_configured": true, 00:23:13.034 "data_offset": 2048, 00:23:13.034 "data_size": 63488 00:23:13.034 }, 00:23:13.034 { 00:23:13.034 "name": "BaseBdev2", 00:23:13.034 "uuid": "62774522-6961-5e72-94a1-2c239e4c654c", 00:23:13.034 "is_configured": true, 00:23:13.034 "data_offset": 2048, 00:23:13.034 "data_size": 63488 00:23:13.034 } 00:23:13.034 ] 00:23:13.034 }' 00:23:13.034 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:13.292 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:13.292 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:13.292 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:13.292 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.292 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:13.292 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:23:13.292 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:13.550 [2024-07-13 22:06:32.791573] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:13.550 22:06:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:13.550 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:13.550 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:13.550 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:13.550 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:13.550 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:13.550 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:13.550 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:13.550 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:13.550 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:13.550 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.550 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.808 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:13.808 "name": "raid_bdev1", 00:23:13.808 "uuid": "58e34fcc-5089-4367-ba56-35a9076c372a", 00:23:13.808 "strip_size_kb": 0, 00:23:13.808 "state": "online", 00:23:13.808 "raid_level": "raid1", 00:23:13.808 "superblock": true, 00:23:13.808 "num_base_bdevs": 2, 00:23:13.808 "num_base_bdevs_discovered": 1, 00:23:13.808 "num_base_bdevs_operational": 1, 00:23:13.808 "base_bdevs_list": [ 00:23:13.808 { 00:23:13.808 "name": null, 00:23:13.808 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:13.808 "is_configured": false, 00:23:13.808 "data_offset": 2048, 00:23:13.808 "data_size": 63488 00:23:13.808 }, 00:23:13.808 { 00:23:13.808 "name": "BaseBdev2", 00:23:13.808 "uuid": "62774522-6961-5e72-94a1-2c239e4c654c", 00:23:13.808 "is_configured": true, 00:23:13.808 "data_offset": 2048, 00:23:13.808 "data_size": 63488 00:23:13.808 } 00:23:13.808 ] 00:23:13.808 }' 00:23:13.808 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:13.808 22:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:14.065 22:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:14.325 [2024-07-13 22:06:33.605826] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:14.325 [2024-07-13 22:06:33.606031] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:14.325 [2024-07-13 22:06:33.606054] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:14.325 [2024-07-13 22:06:33.606102] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:14.325 [2024-07-13 22:06:33.624315] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00001fa00 00:23:14.325 [2024-07-13 22:06:33.626129] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:14.325 22:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:23:15.309 22:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:15.309 22:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:15.309 22:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:15.309 22:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:15.309 22:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:15.309 22:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.309 22:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.568 22:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:15.568 "name": "raid_bdev1", 00:23:15.568 "uuid": "58e34fcc-5089-4367-ba56-35a9076c372a", 00:23:15.568 "strip_size_kb": 0, 00:23:15.568 "state": "online", 00:23:15.568 "raid_level": "raid1", 00:23:15.568 "superblock": true, 00:23:15.568 "num_base_bdevs": 2, 00:23:15.569 "num_base_bdevs_discovered": 2, 00:23:15.569 "num_base_bdevs_operational": 2, 00:23:15.569 "process": { 00:23:15.569 "type": "rebuild", 00:23:15.569 "target": "spare", 00:23:15.569 "progress": { 00:23:15.569 "blocks": 22528, 
00:23:15.569 "percent": 35 00:23:15.569 } 00:23:15.569 }, 00:23:15.569 "base_bdevs_list": [ 00:23:15.569 { 00:23:15.569 "name": "spare", 00:23:15.569 "uuid": "0c0841ce-3449-5656-aebb-4fa69d994530", 00:23:15.569 "is_configured": true, 00:23:15.569 "data_offset": 2048, 00:23:15.569 "data_size": 63488 00:23:15.569 }, 00:23:15.569 { 00:23:15.569 "name": "BaseBdev2", 00:23:15.569 "uuid": "62774522-6961-5e72-94a1-2c239e4c654c", 00:23:15.569 "is_configured": true, 00:23:15.569 "data_offset": 2048, 00:23:15.569 "data_size": 63488 00:23:15.569 } 00:23:15.569 ] 00:23:15.569 }' 00:23:15.569 22:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:15.569 22:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:15.569 22:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:15.569 22:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:15.569 22:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:15.828 [2024-07-13 22:06:35.047286] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:15.828 [2024-07-13 22:06:35.137847] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:15.828 [2024-07-13 22:06:35.137925] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:15.828 [2024-07-13 22:06:35.137941] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:15.828 [2024-07-13 22:06:35.137956] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:15.828 22:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:23:15.828 22:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:15.828 22:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:15.828 22:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:15.828 22:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:15.828 22:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:15.828 22:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:15.828 22:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:15.828 22:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:15.828 22:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:15.828 22:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.828 22:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.087 22:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:16.087 "name": "raid_bdev1", 00:23:16.087 "uuid": "58e34fcc-5089-4367-ba56-35a9076c372a", 00:23:16.087 "strip_size_kb": 0, 00:23:16.087 "state": "online", 00:23:16.087 "raid_level": "raid1", 00:23:16.087 "superblock": true, 00:23:16.087 "num_base_bdevs": 2, 00:23:16.087 "num_base_bdevs_discovered": 1, 00:23:16.087 "num_base_bdevs_operational": 1, 00:23:16.087 "base_bdevs_list": [ 00:23:16.087 { 00:23:16.087 "name": null, 00:23:16.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.087 "is_configured": false, 00:23:16.087 
"data_offset": 2048, 00:23:16.087 "data_size": 63488 00:23:16.087 }, 00:23:16.087 { 00:23:16.087 "name": "BaseBdev2", 00:23:16.087 "uuid": "62774522-6961-5e72-94a1-2c239e4c654c", 00:23:16.087 "is_configured": true, 00:23:16.087 "data_offset": 2048, 00:23:16.087 "data_size": 63488 00:23:16.087 } 00:23:16.087 ] 00:23:16.087 }' 00:23:16.087 22:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:16.087 22:06:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:16.655 22:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:16.655 [2024-07-13 22:06:35.989553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:16.655 [2024-07-13 22:06:35.989621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:16.655 [2024-07-13 22:06:35.989645] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044180 00:23:16.655 [2024-07-13 22:06:35.989663] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:16.655 [2024-07-13 22:06:35.990220] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:16.655 [2024-07-13 22:06:35.990247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:16.655 [2024-07-13 22:06:35.990344] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:16.655 [2024-07-13 22:06:35.990361] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:16.655 [2024-07-13 22:06:35.990373] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:16.655 [2024-07-13 22:06:35.990397] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:16.655 [2024-07-13 22:06:36.008423] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00001fad0 00:23:16.655 spare 00:23:16.655 [2024-07-13 22:06:36.010174] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:16.655 22:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:23:18.032 22:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:18.032 22:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:18.032 22:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:18.032 22:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:18.032 22:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:18.032 22:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.032 22:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.032 22:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:18.032 "name": "raid_bdev1", 00:23:18.032 "uuid": "58e34fcc-5089-4367-ba56-35a9076c372a", 00:23:18.032 "strip_size_kb": 0, 00:23:18.032 "state": "online", 00:23:18.032 "raid_level": "raid1", 00:23:18.032 "superblock": true, 00:23:18.032 "num_base_bdevs": 2, 00:23:18.032 "num_base_bdevs_discovered": 2, 00:23:18.032 "num_base_bdevs_operational": 2, 00:23:18.032 "process": { 00:23:18.032 "type": "rebuild", 00:23:18.032 "target": "spare", 00:23:18.032 "progress": { 00:23:18.032 
"blocks": 22528, 00:23:18.032 "percent": 35 00:23:18.032 } 00:23:18.032 }, 00:23:18.032 "base_bdevs_list": [ 00:23:18.032 { 00:23:18.032 "name": "spare", 00:23:18.032 "uuid": "0c0841ce-3449-5656-aebb-4fa69d994530", 00:23:18.032 "is_configured": true, 00:23:18.032 "data_offset": 2048, 00:23:18.032 "data_size": 63488 00:23:18.032 }, 00:23:18.032 { 00:23:18.032 "name": "BaseBdev2", 00:23:18.032 "uuid": "62774522-6961-5e72-94a1-2c239e4c654c", 00:23:18.032 "is_configured": true, 00:23:18.032 "data_offset": 2048, 00:23:18.032 "data_size": 63488 00:23:18.032 } 00:23:18.032 ] 00:23:18.032 }' 00:23:18.032 22:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:18.032 22:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:18.032 22:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:18.032 22:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:18.032 22:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:18.032 [2024-07-13 22:06:37.419713] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:18.032 [2024-07-13 22:06:37.420964] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:18.032 [2024-07-13 22:06:37.421047] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:18.032 [2024-07-13 22:06:37.421070] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:18.032 [2024-07-13 22:06:37.421079] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:18.291 22:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:23:18.291 22:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:18.291 22:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:18.291 22:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:18.291 22:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:18.291 22:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:18.291 22:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:18.291 22:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:18.291 22:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:18.291 22:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:18.291 22:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.291 22:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.291 22:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:18.291 "name": "raid_bdev1", 00:23:18.291 "uuid": "58e34fcc-5089-4367-ba56-35a9076c372a", 00:23:18.291 "strip_size_kb": 0, 00:23:18.291 "state": "online", 00:23:18.291 "raid_level": "raid1", 00:23:18.291 "superblock": true, 00:23:18.291 "num_base_bdevs": 2, 00:23:18.291 "num_base_bdevs_discovered": 1, 00:23:18.291 "num_base_bdevs_operational": 1, 00:23:18.291 "base_bdevs_list": [ 00:23:18.291 { 00:23:18.291 "name": null, 00:23:18.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.291 "is_configured": false, 00:23:18.291 
"data_offset": 2048, 00:23:18.291 "data_size": 63488 00:23:18.291 }, 00:23:18.291 { 00:23:18.291 "name": "BaseBdev2", 00:23:18.291 "uuid": "62774522-6961-5e72-94a1-2c239e4c654c", 00:23:18.291 "is_configured": true, 00:23:18.291 "data_offset": 2048, 00:23:18.291 "data_size": 63488 00:23:18.291 } 00:23:18.291 ] 00:23:18.291 }' 00:23:18.291 22:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:18.291 22:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:18.859 22:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:18.859 22:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:18.859 22:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:18.859 22:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:18.859 22:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:18.859 22:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.859 22:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.118 22:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:19.118 "name": "raid_bdev1", 00:23:19.118 "uuid": "58e34fcc-5089-4367-ba56-35a9076c372a", 00:23:19.118 "strip_size_kb": 0, 00:23:19.118 "state": "online", 00:23:19.118 "raid_level": "raid1", 00:23:19.118 "superblock": true, 00:23:19.118 "num_base_bdevs": 2, 00:23:19.118 "num_base_bdevs_discovered": 1, 00:23:19.118 "num_base_bdevs_operational": 1, 00:23:19.118 "base_bdevs_list": [ 00:23:19.118 { 00:23:19.118 "name": null, 00:23:19.118 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:19.118 "is_configured": false, 00:23:19.118 "data_offset": 2048, 00:23:19.118 "data_size": 63488 00:23:19.118 }, 00:23:19.118 { 00:23:19.118 "name": "BaseBdev2", 00:23:19.119 "uuid": "62774522-6961-5e72-94a1-2c239e4c654c", 00:23:19.119 "is_configured": true, 00:23:19.119 "data_offset": 2048, 00:23:19.119 "data_size": 63488 00:23:19.119 } 00:23:19.119 ] 00:23:19.119 }' 00:23:19.119 22:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:19.119 22:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:19.119 22:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:19.119 22:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:19.119 22:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:19.378 22:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:19.378 [2024-07-13 22:06:38.683391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:19.378 [2024-07-13 22:06:38.683456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.378 [2024-07-13 22:06:38.683484] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044780 00:23:19.378 [2024-07-13 22:06:38.683496] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.378 [2024-07-13 22:06:38.684025] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.378 [2024-07-13 22:06:38.684045] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:19.378 [2024-07-13 22:06:38.684128] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:19.378 [2024-07-13 22:06:38.684144] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:19.378 [2024-07-13 22:06:38.684156] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:19.378 BaseBdev1 00:23:19.378 22:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:23:20.316 22:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:20.316 22:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:20.316 22:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:20.316 22:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:20.316 22:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:20.316 22:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:20.316 22:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:20.316 22:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:20.316 22:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:20.316 22:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:20.575 22:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.575 22:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.575 22:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:20.575 "name": "raid_bdev1", 00:23:20.575 "uuid": "58e34fcc-5089-4367-ba56-35a9076c372a", 00:23:20.575 "strip_size_kb": 0, 00:23:20.575 "state": "online", 00:23:20.575 "raid_level": "raid1", 00:23:20.575 "superblock": true, 00:23:20.575 "num_base_bdevs": 2, 00:23:20.575 "num_base_bdevs_discovered": 1, 00:23:20.575 "num_base_bdevs_operational": 1, 00:23:20.575 "base_bdevs_list": [ 00:23:20.575 { 00:23:20.575 "name": null, 00:23:20.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.575 "is_configured": false, 00:23:20.575 "data_offset": 2048, 00:23:20.575 "data_size": 63488 00:23:20.575 }, 00:23:20.575 { 00:23:20.575 "name": "BaseBdev2", 00:23:20.575 "uuid": "62774522-6961-5e72-94a1-2c239e4c654c", 00:23:20.575 "is_configured": true, 00:23:20.575 "data_offset": 2048, 00:23:20.575 "data_size": 63488 00:23:20.575 } 00:23:20.575 ] 00:23:20.575 }' 00:23:20.575 22:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:20.575 22:06:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:21.142 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:21.142 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:21.142 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:21.142 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:21.142 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:21.142 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.142 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.401 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:21.401 "name": "raid_bdev1", 00:23:21.401 "uuid": "58e34fcc-5089-4367-ba56-35a9076c372a", 00:23:21.401 "strip_size_kb": 0, 00:23:21.401 "state": "online", 00:23:21.401 "raid_level": "raid1", 00:23:21.401 "superblock": true, 00:23:21.401 "num_base_bdevs": 2, 00:23:21.401 "num_base_bdevs_discovered": 1, 00:23:21.401 "num_base_bdevs_operational": 1, 00:23:21.401 "base_bdevs_list": [ 00:23:21.401 { 00:23:21.401 "name": null, 00:23:21.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.401 "is_configured": false, 00:23:21.401 "data_offset": 2048, 00:23:21.401 "data_size": 63488 00:23:21.401 }, 00:23:21.401 { 00:23:21.401 "name": "BaseBdev2", 00:23:21.401 "uuid": "62774522-6961-5e72-94a1-2c239e4c654c", 00:23:21.401 "is_configured": true, 00:23:21.401 "data_offset": 2048, 00:23:21.401 "data_size": 63488 00:23:21.401 } 00:23:21.401 ] 00:23:21.401 }' 00:23:21.401 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:21.401 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:21.401 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:21.401 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:21.401 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:21.401 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local 
es=0 00:23:21.401 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:21.401 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:21.401 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:21.401 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:21.401 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:21.401 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:21.401 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:21.401 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:21.401 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:23:21.401 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:21.401 [2024-07-13 22:06:40.777136] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:21.401 [2024-07-13 22:06:40.777295] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:21.401 
[2024-07-13 22:06:40.777311] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:21.401 request: 00:23:21.401 { 00:23:21.401 "base_bdev": "BaseBdev1", 00:23:21.401 "raid_bdev": "raid_bdev1", 00:23:21.401 "method": "bdev_raid_add_base_bdev", 00:23:21.401 "req_id": 1 00:23:21.401 } 00:23:21.401 Got JSON-RPC error response 00:23:21.401 response: 00:23:21.401 { 00:23:21.401 "code": -22, 00:23:21.401 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:21.401 } 00:23:21.660 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:23:21.660 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:21.660 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:21.660 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:21.660 22:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:23:22.598 22:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:22.598 22:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:22.598 22:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:22.598 22:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:22.598 22:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:22.598 22:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:22.598 22:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:22.598 22:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:22.598 22:06:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:22.598 22:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:22.598 22:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.598 22:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.598 22:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:22.598 "name": "raid_bdev1", 00:23:22.598 "uuid": "58e34fcc-5089-4367-ba56-35a9076c372a", 00:23:22.598 "strip_size_kb": 0, 00:23:22.598 "state": "online", 00:23:22.598 "raid_level": "raid1", 00:23:22.598 "superblock": true, 00:23:22.598 "num_base_bdevs": 2, 00:23:22.598 "num_base_bdevs_discovered": 1, 00:23:22.598 "num_base_bdevs_operational": 1, 00:23:22.598 "base_bdevs_list": [ 00:23:22.598 { 00:23:22.598 "name": null, 00:23:22.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.598 "is_configured": false, 00:23:22.598 "data_offset": 2048, 00:23:22.598 "data_size": 63488 00:23:22.598 }, 00:23:22.598 { 00:23:22.598 "name": "BaseBdev2", 00:23:22.598 "uuid": "62774522-6961-5e72-94a1-2c239e4c654c", 00:23:22.598 "is_configured": true, 00:23:22.598 "data_offset": 2048, 00:23:22.598 "data_size": 63488 00:23:22.598 } 00:23:22.598 ] 00:23:22.598 }' 00:23:22.598 22:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:22.598 22:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:23.166 22:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:23.166 22:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:23.166 22:06:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:23.166 22:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:23.166 22:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:23.166 22:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.166 22:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.426 22:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:23.426 "name": "raid_bdev1", 00:23:23.426 "uuid": "58e34fcc-5089-4367-ba56-35a9076c372a", 00:23:23.426 "strip_size_kb": 0, 00:23:23.426 "state": "online", 00:23:23.426 "raid_level": "raid1", 00:23:23.426 "superblock": true, 00:23:23.426 "num_base_bdevs": 2, 00:23:23.426 "num_base_bdevs_discovered": 1, 00:23:23.426 "num_base_bdevs_operational": 1, 00:23:23.426 "base_bdevs_list": [ 00:23:23.426 { 00:23:23.426 "name": null, 00:23:23.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.426 "is_configured": false, 00:23:23.426 "data_offset": 2048, 00:23:23.426 "data_size": 63488 00:23:23.426 }, 00:23:23.426 { 00:23:23.426 "name": "BaseBdev2", 00:23:23.426 "uuid": "62774522-6961-5e72-94a1-2c239e4c654c", 00:23:23.426 "is_configured": true, 00:23:23.426 "data_offset": 2048, 00:23:23.426 "data_size": 63488 00:23:23.426 } 00:23:23.426 ] 00:23:23.426 }' 00:23:23.426 22:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:23.426 22:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:23.426 22:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:23.426 22:06:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:23.426 22:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 1473208 00:23:23.426 22:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@948 -- # '[' -z 1473208 ']' 00:23:23.426 22:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # kill -0 1473208 00:23:23.426 22:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # uname 00:23:23.426 22:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:23.426 22:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1473208 00:23:23.426 22:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:23.426 22:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:23.426 22:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1473208' 00:23:23.426 killing process with pid 1473208 00:23:23.426 22:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # kill 1473208 00:23:23.426 Received shutdown signal, test time was about 22.802330 seconds 00:23:23.426 00:23:23.426 Latency(us) 00:23:23.426 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.426 =================================================================================================================== 00:23:23.426 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:23.426 [2024-07-13 22:06:42.762848] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:23.426 [2024-07-13 22:06:42.762985] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:23.426 22:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # wait 1473208 
00:23:23.426 [2024-07-13 22:06:42.763050] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:23.426 [2024-07-13 22:06:42.763063] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043b80 name raid_bdev1, state offline 00:23:23.685 [2024-07-13 22:06:42.930104] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:23:25.061 00:23:25.061 real 0m27.772s 00:23:25.061 user 0m40.844s 00:23:25.061 sys 0m3.851s 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:25.061 ************************************ 00:23:25.061 END TEST raid_rebuild_test_sb_io 00:23:25.061 ************************************ 00:23:25.061 22:06:44 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:25.061 22:06:44 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:23:25.061 22:06:44 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:23:25.061 22:06:44 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:23:25.061 22:06:44 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:25.061 22:06:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:25.061 ************************************ 00:23:25.061 START TEST raid_rebuild_test 00:23:25.061 ************************************ 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 false false true 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:23:25.061 22:06:44 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 
00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:23:25.061 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:23:25.062 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:23:25.062 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=1478172 00:23:25.062 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 1478172 /var/tmp/spdk-raid.sock 00:23:25.062 22:06:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:25.062 22:06:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 1478172 ']' 00:23:25.062 22:06:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:25.062 22:06:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:25.062 22:06:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:25.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:23:25.062 22:06:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:25.062 22:06:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.062 [2024-07-13 22:06:44.408348] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:25.062 [2024-07-13 22:06:44.408449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1478172 ] 00:23:25.062 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:25.062 Zero copy mechanism will not be used. 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3d:02.0 
cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3f:01.6 cannot be used 
00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:25.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.322 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:25.322 [2024-07-13 22:06:44.570736] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.582 [2024-07-13 22:06:44.779394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.841 [2024-07-13 22:06:45.022520] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:25.841 [2024-07-13 22:06:45.022552] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:25.841 22:06:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:25.841 22:06:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:23:25.841 22:06:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in 
"${base_bdevs[@]}" 00:23:25.841 22:06:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:26.100 BaseBdev1_malloc 00:23:26.100 22:06:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:26.359 [2024-07-13 22:06:45.534045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:26.359 [2024-07-13 22:06:45.534104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.359 [2024-07-13 22:06:45.534143] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:23:26.359 [2024-07-13 22:06:45.534157] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.359 [2024-07-13 22:06:45.536245] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.359 [2024-07-13 22:06:45.536277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:26.359 BaseBdev1 00:23:26.359 22:06:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:26.359 22:06:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:26.359 BaseBdev2_malloc 00:23:26.618 22:06:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:26.618 [2024-07-13 22:06:45.907970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:26.618 [2024-07-13 22:06:45.908025] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.618 [2024-07-13 22:06:45.908064] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:23:26.618 [2024-07-13 22:06:45.908081] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.618 [2024-07-13 22:06:45.910246] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.618 [2024-07-13 22:06:45.910277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:26.618 BaseBdev2 00:23:26.618 22:06:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:26.618 22:06:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:26.877 BaseBdev3_malloc 00:23:26.877 22:06:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:27.136 [2024-07-13 22:06:46.283780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:27.137 [2024-07-13 22:06:46.283840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.137 [2024-07-13 22:06:46.283878] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:23:27.137 [2024-07-13 22:06:46.283893] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.137 [2024-07-13 22:06:46.285980] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.137 [2024-07-13 22:06:46.286011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:27.137 BaseBdev3 00:23:27.137 22:06:46 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:27.137 22:06:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:27.137 BaseBdev4_malloc 00:23:27.137 22:06:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:27.397 [2024-07-13 22:06:46.659672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:27.397 [2024-07-13 22:06:46.659747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.397 [2024-07-13 22:06:46.659769] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80 00:23:27.397 [2024-07-13 22:06:46.659783] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.397 [2024-07-13 22:06:46.661982] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.397 [2024-07-13 22:06:46.662014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:27.397 BaseBdev4 00:23:27.397 22:06:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:27.656 spare_malloc 00:23:27.656 22:06:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:27.656 spare_delay 00:23:27.915 22:06:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
spare_delay -p spare 00:23:27.915 [2024-07-13 22:06:47.192132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:27.915 [2024-07-13 22:06:47.192180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.915 [2024-07-13 22:06:47.192219] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:23:27.915 [2024-07-13 22:06:47.192232] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.915 [2024-07-13 22:06:47.194352] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.915 [2024-07-13 22:06:47.194381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:27.915 spare 00:23:27.915 22:06:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:23:28.175 [2024-07-13 22:06:47.352597] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:28.175 [2024-07-13 22:06:47.354522] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:28.175 [2024-07-13 22:06:47.354577] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:28.175 [2024-07-13 22:06:47.354624] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:28.175 [2024-07-13 22:06:47.354702] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:23:28.175 [2024-07-13 22:06:47.354715] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:28.175 [2024-07-13 22:06:47.355015] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:23:28.175 [2024-07-13 22:06:47.355215] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x616000043280 00:23:28.175 [2024-07-13 22:06:47.355228] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:23:28.175 [2024-07-13 22:06:47.355394] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:28.175 22:06:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:28.175 22:06:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:28.175 22:06:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:28.175 22:06:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:28.175 22:06:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:28.175 22:06:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:28.175 22:06:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:28.175 22:06:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:28.175 22:06:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:28.175 22:06:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:28.175 22:06:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.175 22:06:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.175 22:06:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:28.175 "name": "raid_bdev1", 00:23:28.175 "uuid": "999af518-b1dd-4407-970a-82d02bc3e81f", 00:23:28.175 "strip_size_kb": 0, 00:23:28.175 "state": "online", 00:23:28.175 "raid_level": "raid1", 00:23:28.175 
"superblock": false, 00:23:28.175 "num_base_bdevs": 4, 00:23:28.175 "num_base_bdevs_discovered": 4, 00:23:28.175 "num_base_bdevs_operational": 4, 00:23:28.175 "base_bdevs_list": [ 00:23:28.175 { 00:23:28.175 "name": "BaseBdev1", 00:23:28.175 "uuid": "fce1225c-7f27-5387-80dc-7cad265de7d8", 00:23:28.175 "is_configured": true, 00:23:28.175 "data_offset": 0, 00:23:28.175 "data_size": 65536 00:23:28.175 }, 00:23:28.175 { 00:23:28.175 "name": "BaseBdev2", 00:23:28.175 "uuid": "6e27c35a-c71c-5074-b06e-21e7a5462e7d", 00:23:28.175 "is_configured": true, 00:23:28.175 "data_offset": 0, 00:23:28.175 "data_size": 65536 00:23:28.175 }, 00:23:28.175 { 00:23:28.175 "name": "BaseBdev3", 00:23:28.175 "uuid": "234464c2-bfd5-5336-8322-19906f0296f5", 00:23:28.175 "is_configured": true, 00:23:28.175 "data_offset": 0, 00:23:28.175 "data_size": 65536 00:23:28.175 }, 00:23:28.175 { 00:23:28.175 "name": "BaseBdev4", 00:23:28.175 "uuid": "343042a1-b3df-52a0-a1e4-ee1ee1c562b7", 00:23:28.175 "is_configured": true, 00:23:28.175 "data_offset": 0, 00:23:28.175 "data_size": 65536 00:23:28.175 } 00:23:28.175 ] 00:23:28.175 }' 00:23:28.175 22:06:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:28.175 22:06:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.811 22:06:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:28.811 22:06:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:23:28.811 [2024-07-13 22:06:48.187067] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:29.070 22:06:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:23:29.070 22:06:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:23:29.070 22:06:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:29.070 22:06:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:23:29.071 22:06:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:23:29.071 22:06:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:23:29.071 22:06:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:23:29.071 22:06:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:29.071 22:06:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:29.071 22:06:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:29.071 22:06:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:29.071 22:06:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:29.071 22:06:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:29.071 22:06:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:29.071 22:06:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:29.071 22:06:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:29.071 22:06:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:29.330 [2024-07-13 22:06:48.511666] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:23:29.330 /dev/nbd0 00:23:29.330 22:06:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:29.330 22:06:48 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:29.330 22:06:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:29.330 22:06:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:23:29.330 22:06:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:29.330 22:06:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:29.330 22:06:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:29.330 22:06:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:23:29.330 22:06:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:29.330 22:06:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:29.330 22:06:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:29.330 1+0 records in 00:23:29.330 1+0 records out 00:23:29.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251512 s, 16.3 MB/s 00:23:29.330 22:06:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:29.330 22:06:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:23:29.330 22:06:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:29.330 22:06:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:29.330 22:06:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:23:29.330 22:06:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:29.330 22:06:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:23:29.330 22:06:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:23:29.330 22:06:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:23:29.331 22:06:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:23:34.600 65536+0 records in 00:23:34.600 65536+0 records out 00:23:34.600 33554432 bytes (34 MB, 32 MiB) copied, 4.68497 s, 7.2 MB/s 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:34.600 [2024-07-13 22:06:53.448752] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:34.600 [2024-07-13 22:06:53.613312] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.600 22:06:53 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:34.600 "name": "raid_bdev1", 00:23:34.600 "uuid": "999af518-b1dd-4407-970a-82d02bc3e81f", 00:23:34.600 "strip_size_kb": 0, 00:23:34.600 "state": "online", 00:23:34.600 "raid_level": "raid1", 00:23:34.600 "superblock": false, 00:23:34.600 "num_base_bdevs": 4, 00:23:34.600 "num_base_bdevs_discovered": 3, 00:23:34.600 "num_base_bdevs_operational": 3, 00:23:34.600 "base_bdevs_list": [ 00:23:34.600 { 00:23:34.600 "name": null, 00:23:34.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.600 "is_configured": false, 00:23:34.600 "data_offset": 0, 00:23:34.600 "data_size": 65536 00:23:34.600 }, 00:23:34.600 { 00:23:34.600 "name": "BaseBdev2", 00:23:34.600 "uuid": "6e27c35a-c71c-5074-b06e-21e7a5462e7d", 00:23:34.601 "is_configured": true, 00:23:34.601 "data_offset": 0, 00:23:34.601 "data_size": 65536 00:23:34.601 }, 00:23:34.601 { 00:23:34.601 "name": "BaseBdev3", 00:23:34.601 "uuid": "234464c2-bfd5-5336-8322-19906f0296f5", 00:23:34.601 "is_configured": true, 00:23:34.601 "data_offset": 0, 00:23:34.601 "data_size": 65536 00:23:34.601 }, 00:23:34.601 { 00:23:34.601 "name": "BaseBdev4", 00:23:34.601 "uuid": "343042a1-b3df-52a0-a1e4-ee1ee1c562b7", 00:23:34.601 "is_configured": true, 00:23:34.601 "data_offset": 0, 00:23:34.601 "data_size": 65536 00:23:34.601 } 00:23:34.601 ] 00:23:34.601 }' 00:23:34.601 22:06:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:34.601 22:06:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.169 22:06:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:35.169 [2024-07-13 22:06:54.427490] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:35.169 [2024-07-13 22:06:54.446115] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000d145a0 00:23:35.169 [2024-07-13 22:06:54.447922] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:35.169 22:06:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:23:36.106 22:06:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:36.106 22:06:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:36.106 22:06:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:36.106 22:06:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:36.106 22:06:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:36.106 22:06:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.106 22:06:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.365 22:06:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:36.365 "name": "raid_bdev1", 00:23:36.365 "uuid": "999af518-b1dd-4407-970a-82d02bc3e81f", 00:23:36.365 "strip_size_kb": 0, 00:23:36.365 "state": "online", 00:23:36.365 "raid_level": "raid1", 00:23:36.365 "superblock": false, 00:23:36.365 "num_base_bdevs": 4, 00:23:36.365 "num_base_bdevs_discovered": 4, 00:23:36.365 "num_base_bdevs_operational": 4, 00:23:36.365 "process": { 00:23:36.365 "type": "rebuild", 00:23:36.365 "target": "spare", 00:23:36.365 "progress": { 00:23:36.365 "blocks": 22528, 00:23:36.365 "percent": 34 00:23:36.365 } 00:23:36.365 }, 00:23:36.365 "base_bdevs_list": [ 00:23:36.365 { 00:23:36.365 "name": "spare", 00:23:36.365 "uuid": "bb53043c-e730-564d-bdb4-4a0bc5bdc1c4", 00:23:36.365 "is_configured": true, 00:23:36.365 "data_offset": 0, 00:23:36.365 
"data_size": 65536 00:23:36.365 }, 00:23:36.365 { 00:23:36.365 "name": "BaseBdev2", 00:23:36.365 "uuid": "6e27c35a-c71c-5074-b06e-21e7a5462e7d", 00:23:36.365 "is_configured": true, 00:23:36.365 "data_offset": 0, 00:23:36.365 "data_size": 65536 00:23:36.365 }, 00:23:36.365 { 00:23:36.365 "name": "BaseBdev3", 00:23:36.365 "uuid": "234464c2-bfd5-5336-8322-19906f0296f5", 00:23:36.365 "is_configured": true, 00:23:36.365 "data_offset": 0, 00:23:36.365 "data_size": 65536 00:23:36.365 }, 00:23:36.365 { 00:23:36.365 "name": "BaseBdev4", 00:23:36.365 "uuid": "343042a1-b3df-52a0-a1e4-ee1ee1c562b7", 00:23:36.365 "is_configured": true, 00:23:36.365 "data_offset": 0, 00:23:36.365 "data_size": 65536 00:23:36.365 } 00:23:36.365 ] 00:23:36.365 }' 00:23:36.365 22:06:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:36.365 22:06:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:36.365 22:06:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:36.365 22:06:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:36.365 22:06:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:36.624 [2024-07-13 22:06:55.893300] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:36.624 [2024-07-13 22:06:55.959365] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:36.624 [2024-07-13 22:06:55.959429] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:36.624 [2024-07-13 22:06:55.959447] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:36.624 [2024-07-13 22:06:55.959459] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target 
bdev: No such device 00:23:36.624 22:06:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:36.624 22:06:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:36.624 22:06:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:36.624 22:06:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:36.624 22:06:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:36.624 22:06:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:36.624 22:06:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:36.624 22:06:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:36.624 22:06:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:36.624 22:06:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:36.624 22:06:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.624 22:06:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.883 22:06:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:36.883 "name": "raid_bdev1", 00:23:36.883 "uuid": "999af518-b1dd-4407-970a-82d02bc3e81f", 00:23:36.883 "strip_size_kb": 0, 00:23:36.883 "state": "online", 00:23:36.883 "raid_level": "raid1", 00:23:36.883 "superblock": false, 00:23:36.883 "num_base_bdevs": 4, 00:23:36.883 "num_base_bdevs_discovered": 3, 00:23:36.883 "num_base_bdevs_operational": 3, 00:23:36.883 "base_bdevs_list": [ 00:23:36.883 { 00:23:36.883 "name": null, 00:23:36.883 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:36.883 "is_configured": false, 00:23:36.883 "data_offset": 0, 00:23:36.883 "data_size": 65536 00:23:36.883 }, 00:23:36.883 { 00:23:36.883 "name": "BaseBdev2", 00:23:36.883 "uuid": "6e27c35a-c71c-5074-b06e-21e7a5462e7d", 00:23:36.883 "is_configured": true, 00:23:36.883 "data_offset": 0, 00:23:36.883 "data_size": 65536 00:23:36.883 }, 00:23:36.883 { 00:23:36.883 "name": "BaseBdev3", 00:23:36.883 "uuid": "234464c2-bfd5-5336-8322-19906f0296f5", 00:23:36.883 "is_configured": true, 00:23:36.883 "data_offset": 0, 00:23:36.883 "data_size": 65536 00:23:36.883 }, 00:23:36.883 { 00:23:36.883 "name": "BaseBdev4", 00:23:36.883 "uuid": "343042a1-b3df-52a0-a1e4-ee1ee1c562b7", 00:23:36.883 "is_configured": true, 00:23:36.883 "data_offset": 0, 00:23:36.883 "data_size": 65536 00:23:36.883 } 00:23:36.883 ] 00:23:36.883 }' 00:23:36.883 22:06:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:36.883 22:06:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.451 22:06:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:37.451 22:06:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:37.451 22:06:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:37.451 22:06:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:37.451 22:06:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:37.451 22:06:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.451 22:06:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.451 22:06:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:23:37.451 "name": "raid_bdev1", 00:23:37.451 "uuid": "999af518-b1dd-4407-970a-82d02bc3e81f", 00:23:37.451 "strip_size_kb": 0, 00:23:37.451 "state": "online", 00:23:37.451 "raid_level": "raid1", 00:23:37.451 "superblock": false, 00:23:37.451 "num_base_bdevs": 4, 00:23:37.451 "num_base_bdevs_discovered": 3, 00:23:37.451 "num_base_bdevs_operational": 3, 00:23:37.451 "base_bdevs_list": [ 00:23:37.451 { 00:23:37.451 "name": null, 00:23:37.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.451 "is_configured": false, 00:23:37.451 "data_offset": 0, 00:23:37.451 "data_size": 65536 00:23:37.451 }, 00:23:37.451 { 00:23:37.451 "name": "BaseBdev2", 00:23:37.451 "uuid": "6e27c35a-c71c-5074-b06e-21e7a5462e7d", 00:23:37.451 "is_configured": true, 00:23:37.451 "data_offset": 0, 00:23:37.451 "data_size": 65536 00:23:37.451 }, 00:23:37.451 { 00:23:37.451 "name": "BaseBdev3", 00:23:37.451 "uuid": "234464c2-bfd5-5336-8322-19906f0296f5", 00:23:37.451 "is_configured": true, 00:23:37.451 "data_offset": 0, 00:23:37.451 "data_size": 65536 00:23:37.451 }, 00:23:37.451 { 00:23:37.451 "name": "BaseBdev4", 00:23:37.451 "uuid": "343042a1-b3df-52a0-a1e4-ee1ee1c562b7", 00:23:37.451 "is_configured": true, 00:23:37.451 "data_offset": 0, 00:23:37.451 "data_size": 65536 00:23:37.451 } 00:23:37.451 ] 00:23:37.451 }' 00:23:37.451 22:06:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:37.710 22:06:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:37.710 22:06:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:37.710 22:06:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:37.710 22:06:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:37.710 [2024-07-13 
22:06:57.056961] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:37.710 [2024-07-13 22:06:57.072143] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d14670 00:23:37.710 [2024-07-13 22:06:57.073935] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:37.710 22:06:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:39.083 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:39.083 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:39.083 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:39.083 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:39.083 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:39.083 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.083 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.083 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:39.083 "name": "raid_bdev1", 00:23:39.083 "uuid": "999af518-b1dd-4407-970a-82d02bc3e81f", 00:23:39.083 "strip_size_kb": 0, 00:23:39.083 "state": "online", 00:23:39.083 "raid_level": "raid1", 00:23:39.083 "superblock": false, 00:23:39.083 "num_base_bdevs": 4, 00:23:39.083 "num_base_bdevs_discovered": 4, 00:23:39.083 "num_base_bdevs_operational": 4, 00:23:39.083 "process": { 00:23:39.083 "type": "rebuild", 00:23:39.083 "target": "spare", 00:23:39.083 "progress": { 00:23:39.083 "blocks": 22528, 00:23:39.084 "percent": 34 00:23:39.084 } 00:23:39.084 }, 00:23:39.084 
"base_bdevs_list": [ 00:23:39.084 { 00:23:39.084 "name": "spare", 00:23:39.084 "uuid": "bb53043c-e730-564d-bdb4-4a0bc5bdc1c4", 00:23:39.084 "is_configured": true, 00:23:39.084 "data_offset": 0, 00:23:39.084 "data_size": 65536 00:23:39.084 }, 00:23:39.084 { 00:23:39.084 "name": "BaseBdev2", 00:23:39.084 "uuid": "6e27c35a-c71c-5074-b06e-21e7a5462e7d", 00:23:39.084 "is_configured": true, 00:23:39.084 "data_offset": 0, 00:23:39.084 "data_size": 65536 00:23:39.084 }, 00:23:39.084 { 00:23:39.084 "name": "BaseBdev3", 00:23:39.084 "uuid": "234464c2-bfd5-5336-8322-19906f0296f5", 00:23:39.084 "is_configured": true, 00:23:39.084 "data_offset": 0, 00:23:39.084 "data_size": 65536 00:23:39.084 }, 00:23:39.084 { 00:23:39.084 "name": "BaseBdev4", 00:23:39.084 "uuid": "343042a1-b3df-52a0-a1e4-ee1ee1c562b7", 00:23:39.084 "is_configured": true, 00:23:39.084 "data_offset": 0, 00:23:39.084 "data_size": 65536 00:23:39.084 } 00:23:39.084 ] 00:23:39.084 }' 00:23:39.084 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:39.084 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:39.084 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:39.084 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:39.084 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:23:39.084 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:23:39.084 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:23:39.084 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:23:39.084 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_remove_base_bdev BaseBdev2 00:23:39.342 [2024-07-13 22:06:58.487307] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:39.342 [2024-07-13 22:06:58.585383] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d14670 00:23:39.342 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:23:39.342 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:23:39.342 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:39.342 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:39.342 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:39.342 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:39.342 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:39.342 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.342 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.601 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:39.601 "name": "raid_bdev1", 00:23:39.601 "uuid": "999af518-b1dd-4407-970a-82d02bc3e81f", 00:23:39.601 "strip_size_kb": 0, 00:23:39.601 "state": "online", 00:23:39.601 "raid_level": "raid1", 00:23:39.601 "superblock": false, 00:23:39.601 "num_base_bdevs": 4, 00:23:39.601 "num_base_bdevs_discovered": 3, 00:23:39.601 "num_base_bdevs_operational": 3, 00:23:39.601 "process": { 00:23:39.601 "type": "rebuild", 00:23:39.601 "target": "spare", 00:23:39.601 "progress": { 00:23:39.601 "blocks": 32768, 00:23:39.601 "percent": 50 00:23:39.601 } 
00:23:39.601 }, 00:23:39.601 "base_bdevs_list": [ 00:23:39.601 { 00:23:39.601 "name": "spare", 00:23:39.601 "uuid": "bb53043c-e730-564d-bdb4-4a0bc5bdc1c4", 00:23:39.601 "is_configured": true, 00:23:39.601 "data_offset": 0, 00:23:39.601 "data_size": 65536 00:23:39.601 }, 00:23:39.601 { 00:23:39.601 "name": null, 00:23:39.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:39.601 "is_configured": false, 00:23:39.601 "data_offset": 0, 00:23:39.601 "data_size": 65536 00:23:39.601 }, 00:23:39.601 { 00:23:39.601 "name": "BaseBdev3", 00:23:39.601 "uuid": "234464c2-bfd5-5336-8322-19906f0296f5", 00:23:39.601 "is_configured": true, 00:23:39.601 "data_offset": 0, 00:23:39.601 "data_size": 65536 00:23:39.601 }, 00:23:39.601 { 00:23:39.601 "name": "BaseBdev4", 00:23:39.601 "uuid": "343042a1-b3df-52a0-a1e4-ee1ee1c562b7", 00:23:39.601 "is_configured": true, 00:23:39.601 "data_offset": 0, 00:23:39.601 "data_size": 65536 00:23:39.601 } 00:23:39.601 ] 00:23:39.601 }' 00:23:39.601 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:39.601 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:39.601 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:39.601 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:39.601 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=749 00:23:39.601 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:39.601 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:39.601 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:39.601 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:39.601 22:06:58 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:39.601 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:39.601 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.601 22:06:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.859 22:06:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:39.859 "name": "raid_bdev1", 00:23:39.859 "uuid": "999af518-b1dd-4407-970a-82d02bc3e81f", 00:23:39.859 "strip_size_kb": 0, 00:23:39.859 "state": "online", 00:23:39.859 "raid_level": "raid1", 00:23:39.859 "superblock": false, 00:23:39.859 "num_base_bdevs": 4, 00:23:39.859 "num_base_bdevs_discovered": 3, 00:23:39.859 "num_base_bdevs_operational": 3, 00:23:39.859 "process": { 00:23:39.859 "type": "rebuild", 00:23:39.859 "target": "spare", 00:23:39.859 "progress": { 00:23:39.859 "blocks": 38912, 00:23:39.859 "percent": 59 00:23:39.859 } 00:23:39.859 }, 00:23:39.859 "base_bdevs_list": [ 00:23:39.859 { 00:23:39.859 "name": "spare", 00:23:39.859 "uuid": "bb53043c-e730-564d-bdb4-4a0bc5bdc1c4", 00:23:39.859 "is_configured": true, 00:23:39.859 "data_offset": 0, 00:23:39.859 "data_size": 65536 00:23:39.859 }, 00:23:39.859 { 00:23:39.859 "name": null, 00:23:39.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:39.859 "is_configured": false, 00:23:39.859 "data_offset": 0, 00:23:39.859 "data_size": 65536 00:23:39.859 }, 00:23:39.859 { 00:23:39.859 "name": "BaseBdev3", 00:23:39.859 "uuid": "234464c2-bfd5-5336-8322-19906f0296f5", 00:23:39.859 "is_configured": true, 00:23:39.859 "data_offset": 0, 00:23:39.859 "data_size": 65536 00:23:39.859 }, 00:23:39.859 { 00:23:39.859 "name": "BaseBdev4", 00:23:39.859 "uuid": "343042a1-b3df-52a0-a1e4-ee1ee1c562b7", 00:23:39.859 "is_configured": true, 
00:23:39.859 "data_offset": 0, 00:23:39.859 "data_size": 65536 00:23:39.859 } 00:23:39.859 ] 00:23:39.859 }' 00:23:39.859 22:06:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:39.859 22:06:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:39.859 22:06:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:39.859 22:06:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:39.859 22:06:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:23:40.794 22:07:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:40.794 22:07:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:40.794 22:07:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:40.794 22:07:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:40.794 22:07:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:40.794 22:07:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:40.794 22:07:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.794 22:07:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.052 22:07:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:41.052 "name": "raid_bdev1", 00:23:41.052 "uuid": "999af518-b1dd-4407-970a-82d02bc3e81f", 00:23:41.052 "strip_size_kb": 0, 00:23:41.052 "state": "online", 00:23:41.052 "raid_level": "raid1", 00:23:41.052 "superblock": false, 00:23:41.052 "num_base_bdevs": 4, 00:23:41.052 
"num_base_bdevs_discovered": 3, 00:23:41.052 "num_base_bdevs_operational": 3, 00:23:41.052 "process": { 00:23:41.052 "type": "rebuild", 00:23:41.052 "target": "spare", 00:23:41.052 "progress": { 00:23:41.052 "blocks": 63488, 00:23:41.052 "percent": 96 00:23:41.052 } 00:23:41.052 }, 00:23:41.052 "base_bdevs_list": [ 00:23:41.052 { 00:23:41.052 "name": "spare", 00:23:41.052 "uuid": "bb53043c-e730-564d-bdb4-4a0bc5bdc1c4", 00:23:41.052 "is_configured": true, 00:23:41.052 "data_offset": 0, 00:23:41.052 "data_size": 65536 00:23:41.052 }, 00:23:41.052 { 00:23:41.052 "name": null, 00:23:41.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.052 "is_configured": false, 00:23:41.052 "data_offset": 0, 00:23:41.052 "data_size": 65536 00:23:41.052 }, 00:23:41.052 { 00:23:41.052 "name": "BaseBdev3", 00:23:41.052 "uuid": "234464c2-bfd5-5336-8322-19906f0296f5", 00:23:41.052 "is_configured": true, 00:23:41.052 "data_offset": 0, 00:23:41.052 "data_size": 65536 00:23:41.052 }, 00:23:41.052 { 00:23:41.052 "name": "BaseBdev4", 00:23:41.052 "uuid": "343042a1-b3df-52a0-a1e4-ee1ee1c562b7", 00:23:41.052 "is_configured": true, 00:23:41.052 "data_offset": 0, 00:23:41.052 "data_size": 65536 00:23:41.052 } 00:23:41.052 ] 00:23:41.052 }' 00:23:41.052 22:07:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:41.052 [2024-07-13 22:07:00.298434] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:41.052 [2024-07-13 22:07:00.298492] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:41.052 [2024-07-13 22:07:00.298529] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:41.052 22:07:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:41.052 22:07:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:41.052 22:07:00 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:41.052 22:07:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:23:41.986 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:41.986 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:41.986 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:41.986 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:41.986 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:41.986 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:42.244 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.244 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.244 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:42.244 "name": "raid_bdev1", 00:23:42.244 "uuid": "999af518-b1dd-4407-970a-82d02bc3e81f", 00:23:42.244 "strip_size_kb": 0, 00:23:42.244 "state": "online", 00:23:42.244 "raid_level": "raid1", 00:23:42.244 "superblock": false, 00:23:42.244 "num_base_bdevs": 4, 00:23:42.244 "num_base_bdevs_discovered": 3, 00:23:42.244 "num_base_bdevs_operational": 3, 00:23:42.244 "base_bdevs_list": [ 00:23:42.244 { 00:23:42.244 "name": "spare", 00:23:42.244 "uuid": "bb53043c-e730-564d-bdb4-4a0bc5bdc1c4", 00:23:42.244 "is_configured": true, 00:23:42.244 "data_offset": 0, 00:23:42.244 "data_size": 65536 00:23:42.244 }, 00:23:42.244 { 00:23:42.244 "name": null, 00:23:42.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.244 "is_configured": false, 
00:23:42.244 "data_offset": 0, 00:23:42.244 "data_size": 65536 00:23:42.244 }, 00:23:42.244 { 00:23:42.244 "name": "BaseBdev3", 00:23:42.244 "uuid": "234464c2-bfd5-5336-8322-19906f0296f5", 00:23:42.244 "is_configured": true, 00:23:42.244 "data_offset": 0, 00:23:42.244 "data_size": 65536 00:23:42.244 }, 00:23:42.244 { 00:23:42.244 "name": "BaseBdev4", 00:23:42.244 "uuid": "343042a1-b3df-52a0-a1e4-ee1ee1c562b7", 00:23:42.244 "is_configured": true, 00:23:42.244 "data_offset": 0, 00:23:42.244 "data_size": 65536 00:23:42.244 } 00:23:42.244 ] 00:23:42.244 }' 00:23:42.244 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:42.244 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:42.244 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:42.502 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:23:42.502 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:23:42.502 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:42.502 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:42.502 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:42.502 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:42.502 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:42.502 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.502 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.502 22:07:01 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:42.502 "name": "raid_bdev1", 00:23:42.502 "uuid": "999af518-b1dd-4407-970a-82d02bc3e81f", 00:23:42.502 "strip_size_kb": 0, 00:23:42.502 "state": "online", 00:23:42.502 "raid_level": "raid1", 00:23:42.502 "superblock": false, 00:23:42.502 "num_base_bdevs": 4, 00:23:42.502 "num_base_bdevs_discovered": 3, 00:23:42.502 "num_base_bdevs_operational": 3, 00:23:42.502 "base_bdevs_list": [ 00:23:42.502 { 00:23:42.502 "name": "spare", 00:23:42.502 "uuid": "bb53043c-e730-564d-bdb4-4a0bc5bdc1c4", 00:23:42.502 "is_configured": true, 00:23:42.502 "data_offset": 0, 00:23:42.502 "data_size": 65536 00:23:42.502 }, 00:23:42.502 { 00:23:42.502 "name": null, 00:23:42.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.502 "is_configured": false, 00:23:42.502 "data_offset": 0, 00:23:42.502 "data_size": 65536 00:23:42.502 }, 00:23:42.502 { 00:23:42.502 "name": "BaseBdev3", 00:23:42.502 "uuid": "234464c2-bfd5-5336-8322-19906f0296f5", 00:23:42.502 "is_configured": true, 00:23:42.502 "data_offset": 0, 00:23:42.502 "data_size": 65536 00:23:42.502 }, 00:23:42.502 { 00:23:42.502 "name": "BaseBdev4", 00:23:42.502 "uuid": "343042a1-b3df-52a0-a1e4-ee1ee1c562b7", 00:23:42.502 "is_configured": true, 00:23:42.502 "data_offset": 0, 00:23:42.502 "data_size": 65536 00:23:42.502 } 00:23:42.502 ] 00:23:42.502 }' 00:23:42.502 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:42.502 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:42.502 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:42.502 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:42.503 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:42.503 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # 
local raid_bdev_name=raid_bdev1 00:23:42.503 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:42.503 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:42.503 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:42.503 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:42.503 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:42.503 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:42.503 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:42.503 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:42.503 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.503 22:07:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.760 22:07:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:42.760 "name": "raid_bdev1", 00:23:42.760 "uuid": "999af518-b1dd-4407-970a-82d02bc3e81f", 00:23:42.760 "strip_size_kb": 0, 00:23:42.760 "state": "online", 00:23:42.760 "raid_level": "raid1", 00:23:42.760 "superblock": false, 00:23:42.760 "num_base_bdevs": 4, 00:23:42.760 "num_base_bdevs_discovered": 3, 00:23:42.760 "num_base_bdevs_operational": 3, 00:23:42.760 "base_bdevs_list": [ 00:23:42.760 { 00:23:42.760 "name": "spare", 00:23:42.760 "uuid": "bb53043c-e730-564d-bdb4-4a0bc5bdc1c4", 00:23:42.760 "is_configured": true, 00:23:42.760 "data_offset": 0, 00:23:42.760 "data_size": 65536 00:23:42.760 }, 00:23:42.760 { 00:23:42.760 "name": null, 00:23:42.760 "uuid": "00000000-0000-0000-0000-000000000000", 
00:23:42.760 "is_configured": false, 00:23:42.760 "data_offset": 0, 00:23:42.760 "data_size": 65536 00:23:42.760 }, 00:23:42.760 { 00:23:42.760 "name": "BaseBdev3", 00:23:42.760 "uuid": "234464c2-bfd5-5336-8322-19906f0296f5", 00:23:42.760 "is_configured": true, 00:23:42.761 "data_offset": 0, 00:23:42.761 "data_size": 65536 00:23:42.761 }, 00:23:42.761 { 00:23:42.761 "name": "BaseBdev4", 00:23:42.761 "uuid": "343042a1-b3df-52a0-a1e4-ee1ee1c562b7", 00:23:42.761 "is_configured": true, 00:23:42.761 "data_offset": 0, 00:23:42.761 "data_size": 65536 00:23:42.761 } 00:23:42.761 ] 00:23:42.761 }' 00:23:42.761 22:07:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:42.761 22:07:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.327 22:07:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:43.327 [2024-07-13 22:07:02.679945] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:43.327 [2024-07-13 22:07:02.679975] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:43.327 [2024-07-13 22:07:02.680044] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:43.327 [2024-07-13 22:07:02.680119] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:43.327 [2024-07-13 22:07:02.680131] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:23:43.327 22:07:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:23:43.327 22:07:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.586 22:07:02 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:23:43.586 22:07:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:23:43.586 22:07:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:23:43.586 22:07:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:43.586 22:07:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:43.586 22:07:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:43.586 22:07:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:43.586 22:07:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:43.586 22:07:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:43.586 22:07:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:43.586 22:07:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:43.586 22:07:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:43.586 22:07:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:43.845 /dev/nbd0 00:23:43.845 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:43.845 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:43.845 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:43.845 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:23:43.845 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:43.845 22:07:03 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:43.845 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:43.845 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:23:43.845 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:43.845 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:43.845 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:43.845 1+0 records in 00:23:43.845 1+0 records out 00:23:43.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258043 s, 15.9 MB/s 00:23:43.845 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:43.845 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:23:43.845 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:43.845 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:43.845 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:23:43.845 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:43.845 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:43.845 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:43.845 /dev/nbd1 00:23:44.103 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:44.103 22:07:03 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:44.103 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:23:44.103 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:23:44.104 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:44.104 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:44.104 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:23:44.104 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:23:44.104 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:44.104 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:44.104 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:44.104 1+0 records in 00:23:44.104 1+0 records out 00:23:44.104 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278662 s, 14.7 MB/s 00:23:44.104 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:44.104 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:23:44.104 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:44.104 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:44.104 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:23:44.104 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:44.104 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 2 )) 00:23:44.104 22:07:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:44.104 22:07:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:44.104 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:44.104 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:44.104 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:44.104 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:44.104 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:44.104 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:44.362 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:44.362 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:44.362 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:44.362 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:44.362 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:44.362 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:44.362 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:44.362 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:44.362 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:44.362 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:44.621 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:44.621 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:44.621 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:44.621 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:44.621 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:44.621 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:44.621 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:44.621 22:07:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:44.621 22:07:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:23:44.621 22:07:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 1478172 00:23:44.621 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 1478172 ']' 00:23:44.621 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 1478172 00:23:44.621 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:23:44.621 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:44.621 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1478172 00:23:44.621 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:44.621 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:44.621 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1478172' 00:23:44.621 killing 
process with pid 1478172 00:23:44.621 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # kill 1478172 00:23:44.621 Received shutdown signal, test time was about 60.000000 seconds 00:23:44.621 00:23:44.621 Latency(us) 00:23:44.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.621 =================================================================================================================== 00:23:44.621 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:44.621 [2024-07-13 22:07:03.877442] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:44.621 22:07:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # wait 1478172 00:23:45.225 [2024-07-13 22:07:04.284052] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:46.162 22:07:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:23:46.162 00:23:46.162 real 0m21.166s 00:23:46.162 user 0m27.728s 00:23:46.162 sys 0m3.818s 00:23:46.162 22:07:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:46.162 22:07:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.162 ************************************ 00:23:46.162 END TEST raid_rebuild_test 00:23:46.162 ************************************ 00:23:46.162 22:07:05 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:46.162 22:07:05 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:23:46.162 22:07:05 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:23:46.162 22:07:05 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:46.162 22:07:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:46.420 ************************************ 00:23:46.420 START TEST raid_rebuild_test_sb 00:23:46.420 ************************************ 00:23:46.420 22:07:05 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 true false true 00:23:46.420 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:23:46.420 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:23:46.420 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:23:46.420 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:23:46.420 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:23:46.420 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:23:46.420 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:46.420 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:23:46.420 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:46.420 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:46.420 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:23:46.420 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:46.421 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:46.421 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:23:46.421 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:46.421 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:46.421 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:23:46.421 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:46.421 22:07:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:46.421 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:46.421 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:23:46.421 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:23:46.421 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:23:46.421 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:23:46.421 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:23:46.421 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:23:46.421 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:23:46.421 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:23:46.421 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:23:46.421 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:23:46.421 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=1482113 00:23:46.421 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 1482113 /var/tmp/spdk-raid.sock 00:23:46.421 22:07:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:46.421 22:07:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1482113 ']' 00:23:46.421 22:07:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:46.421 22:07:05 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:23:46.421 22:07:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:46.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:46.421 22:07:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:46.421 22:07:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.421 [2024-07-13 22:07:05.664113] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:46.421 [2024-07-13 22:07:05.664205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482113 ] 00:23:46.421 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:46.421 Zero copy mechanism will not be used. 
00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:46.421 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:46.421 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:46.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.421 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:46.679 [2024-07-13 22:07:05.825238] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.679 [2024-07-13 22:07:06.023447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.937 [2024-07-13 22:07:06.250451] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:46.937 [2024-07-13 22:07:06.250489] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:47.195 22:07:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:47.195 22:07:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:23:47.195 22:07:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:47.195 22:07:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:47.455 BaseBdev1_malloc 00:23:47.455 22:07:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:47.455 [2024-07-13 22:07:06.768078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:47.455 [2024-07-13 22:07:06.768143] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:23:47.455 [2024-07-13 22:07:06.768167] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:23:47.455 [2024-07-13 22:07:06.768181] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:47.455 [2024-07-13 22:07:06.770305] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:47.455 [2024-07-13 22:07:06.770339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:47.455 BaseBdev1 00:23:47.455 22:07:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:47.455 22:07:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:47.713 BaseBdev2_malloc 00:23:47.713 22:07:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:47.972 [2024-07-13 22:07:07.152628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:47.972 [2024-07-13 22:07:07.152684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:47.972 [2024-07-13 22:07:07.152706] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:23:47.972 [2024-07-13 22:07:07.152722] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:47.972 [2024-07-13 22:07:07.154813] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:47.972 [2024-07-13 22:07:07.154843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:47.972 BaseBdev2 00:23:47.972 22:07:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in 
"${base_bdevs[@]}" 00:23:47.972 22:07:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:47.972 BaseBdev3_malloc 00:23:48.231 22:07:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:48.231 [2024-07-13 22:07:07.521322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:48.231 [2024-07-13 22:07:07.521377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:48.231 [2024-07-13 22:07:07.521416] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:23:48.231 [2024-07-13 22:07:07.521429] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:48.231 [2024-07-13 22:07:07.523434] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:48.231 [2024-07-13 22:07:07.523463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:48.231 BaseBdev3 00:23:48.231 22:07:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:48.231 22:07:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:48.489 BaseBdev4_malloc 00:23:48.489 22:07:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:48.748 [2024-07-13 22:07:07.880998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:48.748 [2024-07-13 
22:07:07.881049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:48.748 [2024-07-13 22:07:07.881070] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80 00:23:48.748 [2024-07-13 22:07:07.881083] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:48.748 [2024-07-13 22:07:07.883158] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:48.748 [2024-07-13 22:07:07.883188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:48.748 BaseBdev4 00:23:48.748 22:07:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:48.748 spare_malloc 00:23:48.748 22:07:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:49.007 spare_delay 00:23:49.007 22:07:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:49.266 [2024-07-13 22:07:08.407839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:49.266 [2024-07-13 22:07:08.407888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:49.266 [2024-07-13 22:07:08.407915] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:23:49.266 [2024-07-13 22:07:08.407930] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:49.266 [2024-07-13 22:07:08.410015] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:49.266 [2024-07-13 22:07:08.410044] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:49.266 spare 00:23:49.266 22:07:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:23:49.266 [2024-07-13 22:07:08.588339] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:49.266 [2024-07-13 22:07:08.590067] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:49.266 [2024-07-13 22:07:08.590121] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:49.266 [2024-07-13 22:07:08.590184] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:49.266 [2024-07-13 22:07:08.590354] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:23:49.266 [2024-07-13 22:07:08.590373] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:49.266 [2024-07-13 22:07:08.590610] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:23:49.266 [2024-07-13 22:07:08.590791] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:23:49.266 [2024-07-13 22:07:08.590802] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:23:49.266 [2024-07-13 22:07:08.590947] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:49.266 22:07:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:49.266 22:07:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:49.266 22:07:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:23:49.266 22:07:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:49.266 22:07:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:49.266 22:07:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:49.266 22:07:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:49.266 22:07:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:49.266 22:07:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:49.266 22:07:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:49.266 22:07:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.266 22:07:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.525 22:07:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:49.525 "name": "raid_bdev1", 00:23:49.525 "uuid": "1983fe3c-c030-4dce-9f72-71a6d79693fe", 00:23:49.525 "strip_size_kb": 0, 00:23:49.525 "state": "online", 00:23:49.525 "raid_level": "raid1", 00:23:49.525 "superblock": true, 00:23:49.525 "num_base_bdevs": 4, 00:23:49.525 "num_base_bdevs_discovered": 4, 00:23:49.525 "num_base_bdevs_operational": 4, 00:23:49.525 "base_bdevs_list": [ 00:23:49.525 { 00:23:49.525 "name": "BaseBdev1", 00:23:49.525 "uuid": "0a433d6e-b11f-5d18-98f0-a6e94f52c2de", 00:23:49.525 "is_configured": true, 00:23:49.525 "data_offset": 2048, 00:23:49.525 "data_size": 63488 00:23:49.525 }, 00:23:49.525 { 00:23:49.525 "name": "BaseBdev2", 00:23:49.525 "uuid": "20b60e08-0f22-5c8c-973c-d6e9c9557cbf", 00:23:49.525 "is_configured": true, 00:23:49.525 "data_offset": 2048, 00:23:49.525 "data_size": 63488 
00:23:49.525 }, 00:23:49.525 { 00:23:49.525 "name": "BaseBdev3", 00:23:49.525 "uuid": "e8584e91-b6eb-5cae-953b-d917dad89bdd", 00:23:49.525 "is_configured": true, 00:23:49.525 "data_offset": 2048, 00:23:49.525 "data_size": 63488 00:23:49.525 }, 00:23:49.525 { 00:23:49.525 "name": "BaseBdev4", 00:23:49.526 "uuid": "00148bfb-96c7-5c63-b8f9-829c2fbb5bea", 00:23:49.526 "is_configured": true, 00:23:49.526 "data_offset": 2048, 00:23:49.526 "data_size": 63488 00:23:49.526 } 00:23:49.526 ] 00:23:49.526 }' 00:23:49.526 22:07:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:49.526 22:07:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:50.094 22:07:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:50.094 22:07:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:23:50.094 [2024-07-13 22:07:09.438843] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:50.094 22:07:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:23:50.094 22:07:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.094 22:07:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:50.353 22:07:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:23:50.353 22:07:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:23:50.353 22:07:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:23:50.353 22:07:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:23:50.353 22:07:09 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:50.353 22:07:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:50.353 22:07:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:50.353 22:07:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:50.353 22:07:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:50.353 22:07:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:50.353 22:07:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:23:50.353 22:07:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:50.353 22:07:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:50.353 22:07:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:50.612 [2024-07-13 22:07:09.783552] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:23:50.612 /dev/nbd0 00:23:50.612 22:07:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:50.612 22:07:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:50.612 22:07:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:50.612 22:07:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:23:50.612 22:07:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:50.612 22:07:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:50.612 22:07:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 
-- # grep -q -w nbd0 /proc/partitions 00:23:50.612 22:07:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:23:50.612 22:07:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:50.612 22:07:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:50.612 22:07:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:50.612 1+0 records in 00:23:50.612 1+0 records out 00:23:50.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246872 s, 16.6 MB/s 00:23:50.612 22:07:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:50.612 22:07:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:23:50.612 22:07:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:50.612 22:07:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:50.612 22:07:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:23:50.612 22:07:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:50.612 22:07:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:50.612 22:07:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:23:50.612 22:07:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:23:50.612 22:07:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:23:57.175 63488+0 records in 00:23:57.175 63488+0 records out 00:23:57.175 32505856 bytes (33 MB, 31 MiB) copied, 5.45491 s, 6.0 
MB/s 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:57.175 [2024-07-13 22:07:15.496749] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:57.175 
[2024-07-13 22:07:15.657251] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:57.175 "name": "raid_bdev1", 00:23:57.175 "uuid": "1983fe3c-c030-4dce-9f72-71a6d79693fe", 00:23:57.175 "strip_size_kb": 0, 00:23:57.175 "state": "online", 00:23:57.175 "raid_level": "raid1", 00:23:57.175 "superblock": true, 00:23:57.175 "num_base_bdevs": 4, 00:23:57.175 "num_base_bdevs_discovered": 3, 00:23:57.175 "num_base_bdevs_operational": 3, 00:23:57.175 
"base_bdevs_list": [ 00:23:57.175 { 00:23:57.175 "name": null, 00:23:57.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.175 "is_configured": false, 00:23:57.175 "data_offset": 2048, 00:23:57.175 "data_size": 63488 00:23:57.175 }, 00:23:57.175 { 00:23:57.175 "name": "BaseBdev2", 00:23:57.175 "uuid": "20b60e08-0f22-5c8c-973c-d6e9c9557cbf", 00:23:57.175 "is_configured": true, 00:23:57.175 "data_offset": 2048, 00:23:57.175 "data_size": 63488 00:23:57.175 }, 00:23:57.175 { 00:23:57.175 "name": "BaseBdev3", 00:23:57.175 "uuid": "e8584e91-b6eb-5cae-953b-d917dad89bdd", 00:23:57.175 "is_configured": true, 00:23:57.175 "data_offset": 2048, 00:23:57.175 "data_size": 63488 00:23:57.175 }, 00:23:57.175 { 00:23:57.175 "name": "BaseBdev4", 00:23:57.175 "uuid": "00148bfb-96c7-5c63-b8f9-829c2fbb5bea", 00:23:57.175 "is_configured": true, 00:23:57.175 "data_offset": 2048, 00:23:57.175 "data_size": 63488 00:23:57.175 } 00:23:57.175 ] 00:23:57.175 }' 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:57.175 22:07:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:57.175 22:07:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:57.175 [2024-07-13 22:07:16.507521] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:57.175 [2024-07-13 22:07:16.523707] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caad40 00:23:57.175 [2024-07-13 22:07:16.525518] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:57.175 22:07:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:23:58.551 22:07:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:58.551 22:07:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:58.551 22:07:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:58.551 22:07:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:58.551 22:07:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:58.551 22:07:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.551 22:07:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.551 22:07:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:58.551 "name": "raid_bdev1", 00:23:58.551 "uuid": "1983fe3c-c030-4dce-9f72-71a6d79693fe", 00:23:58.551 "strip_size_kb": 0, 00:23:58.551 "state": "online", 00:23:58.551 "raid_level": "raid1", 00:23:58.551 "superblock": true, 00:23:58.551 "num_base_bdevs": 4, 00:23:58.551 "num_base_bdevs_discovered": 4, 00:23:58.551 "num_base_bdevs_operational": 4, 00:23:58.551 "process": { 00:23:58.551 "type": "rebuild", 00:23:58.551 "target": "spare", 00:23:58.551 "progress": { 00:23:58.551 "blocks": 22528, 00:23:58.551 "percent": 35 00:23:58.551 } 00:23:58.551 }, 00:23:58.551 "base_bdevs_list": [ 00:23:58.551 { 00:23:58.551 "name": "spare", 00:23:58.551 "uuid": "ead76a83-9b07-5e11-91d4-c7f902c6a847", 00:23:58.551 "is_configured": true, 00:23:58.551 "data_offset": 2048, 00:23:58.551 "data_size": 63488 00:23:58.551 }, 00:23:58.551 { 00:23:58.551 "name": "BaseBdev2", 00:23:58.551 "uuid": "20b60e08-0f22-5c8c-973c-d6e9c9557cbf", 00:23:58.551 "is_configured": true, 00:23:58.551 "data_offset": 2048, 00:23:58.551 "data_size": 63488 00:23:58.551 }, 00:23:58.551 { 00:23:58.551 "name": "BaseBdev3", 00:23:58.551 "uuid": "e8584e91-b6eb-5cae-953b-d917dad89bdd", 
00:23:58.551 "is_configured": true, 00:23:58.551 "data_offset": 2048, 00:23:58.551 "data_size": 63488 00:23:58.551 }, 00:23:58.551 { 00:23:58.551 "name": "BaseBdev4", 00:23:58.551 "uuid": "00148bfb-96c7-5c63-b8f9-829c2fbb5bea", 00:23:58.551 "is_configured": true, 00:23:58.551 "data_offset": 2048, 00:23:58.551 "data_size": 63488 00:23:58.551 } 00:23:58.551 ] 00:23:58.551 }' 00:23:58.551 22:07:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:58.551 22:07:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:58.551 22:07:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:58.551 22:07:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:58.551 22:07:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:58.811 [2024-07-13 22:07:17.946880] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:58.811 [2024-07-13 22:07:18.036973] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:58.811 [2024-07-13 22:07:18.037026] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:58.811 [2024-07-13 22:07:18.037061] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:58.811 [2024-07-13 22:07:18.037072] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:58.811 22:07:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:58.811 22:07:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:58.811 22:07:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # 
local expected_state=online 00:23:58.811 22:07:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:58.811 22:07:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:58.811 22:07:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:58.811 22:07:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:58.811 22:07:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:58.811 22:07:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:58.811 22:07:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:58.811 22:07:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.811 22:07:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.070 22:07:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:59.070 "name": "raid_bdev1", 00:23:59.070 "uuid": "1983fe3c-c030-4dce-9f72-71a6d79693fe", 00:23:59.070 "strip_size_kb": 0, 00:23:59.070 "state": "online", 00:23:59.070 "raid_level": "raid1", 00:23:59.070 "superblock": true, 00:23:59.070 "num_base_bdevs": 4, 00:23:59.070 "num_base_bdevs_discovered": 3, 00:23:59.070 "num_base_bdevs_operational": 3, 00:23:59.070 "base_bdevs_list": [ 00:23:59.070 { 00:23:59.070 "name": null, 00:23:59.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.070 "is_configured": false, 00:23:59.070 "data_offset": 2048, 00:23:59.070 "data_size": 63488 00:23:59.070 }, 00:23:59.070 { 00:23:59.070 "name": "BaseBdev2", 00:23:59.070 "uuid": "20b60e08-0f22-5c8c-973c-d6e9c9557cbf", 00:23:59.070 "is_configured": true, 00:23:59.070 "data_offset": 2048, 00:23:59.070 
"data_size": 63488 00:23:59.070 }, 00:23:59.070 { 00:23:59.070 "name": "BaseBdev3", 00:23:59.070 "uuid": "e8584e91-b6eb-5cae-953b-d917dad89bdd", 00:23:59.070 "is_configured": true, 00:23:59.070 "data_offset": 2048, 00:23:59.070 "data_size": 63488 00:23:59.070 }, 00:23:59.070 { 00:23:59.070 "name": "BaseBdev4", 00:23:59.070 "uuid": "00148bfb-96c7-5c63-b8f9-829c2fbb5bea", 00:23:59.070 "is_configured": true, 00:23:59.070 "data_offset": 2048, 00:23:59.070 "data_size": 63488 00:23:59.070 } 00:23:59.070 ] 00:23:59.070 }' 00:23:59.070 22:07:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:59.070 22:07:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:59.638 22:07:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:59.638 22:07:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:59.638 22:07:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:59.638 22:07:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:59.638 22:07:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:59.638 22:07:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.638 22:07:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.638 22:07:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:59.638 "name": "raid_bdev1", 00:23:59.638 "uuid": "1983fe3c-c030-4dce-9f72-71a6d79693fe", 00:23:59.638 "strip_size_kb": 0, 00:23:59.638 "state": "online", 00:23:59.638 "raid_level": "raid1", 00:23:59.638 "superblock": true, 00:23:59.638 "num_base_bdevs": 4, 00:23:59.638 
"num_base_bdevs_discovered": 3, 00:23:59.638 "num_base_bdevs_operational": 3, 00:23:59.638 "base_bdevs_list": [ 00:23:59.638 { 00:23:59.638 "name": null, 00:23:59.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.638 "is_configured": false, 00:23:59.638 "data_offset": 2048, 00:23:59.638 "data_size": 63488 00:23:59.638 }, 00:23:59.638 { 00:23:59.638 "name": "BaseBdev2", 00:23:59.638 "uuid": "20b60e08-0f22-5c8c-973c-d6e9c9557cbf", 00:23:59.638 "is_configured": true, 00:23:59.638 "data_offset": 2048, 00:23:59.638 "data_size": 63488 00:23:59.638 }, 00:23:59.638 { 00:23:59.638 "name": "BaseBdev3", 00:23:59.638 "uuid": "e8584e91-b6eb-5cae-953b-d917dad89bdd", 00:23:59.638 "is_configured": true, 00:23:59.638 "data_offset": 2048, 00:23:59.638 "data_size": 63488 00:23:59.638 }, 00:23:59.638 { 00:23:59.638 "name": "BaseBdev4", 00:23:59.638 "uuid": "00148bfb-96c7-5c63-b8f9-829c2fbb5bea", 00:23:59.638 "is_configured": true, 00:23:59.638 "data_offset": 2048, 00:23:59.638 "data_size": 63488 00:23:59.638 } 00:23:59.638 ] 00:23:59.638 }' 00:23:59.638 22:07:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:59.638 22:07:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:59.638 22:07:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:59.638 22:07:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:59.638 22:07:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:59.897 [2024-07-13 22:07:19.107115] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:59.897 [2024-07-13 22:07:19.122623] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caae10 00:23:59.897 [2024-07-13 22:07:19.124438] 
bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:59.897 22:07:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:00.835 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:00.835 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:00.835 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:00.835 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:00.835 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:00.835 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.835 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.094 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:01.094 "name": "raid_bdev1", 00:24:01.094 "uuid": "1983fe3c-c030-4dce-9f72-71a6d79693fe", 00:24:01.094 "strip_size_kb": 0, 00:24:01.094 "state": "online", 00:24:01.094 "raid_level": "raid1", 00:24:01.094 "superblock": true, 00:24:01.094 "num_base_bdevs": 4, 00:24:01.094 "num_base_bdevs_discovered": 4, 00:24:01.094 "num_base_bdevs_operational": 4, 00:24:01.094 "process": { 00:24:01.094 "type": "rebuild", 00:24:01.094 "target": "spare", 00:24:01.094 "progress": { 00:24:01.094 "blocks": 22528, 00:24:01.094 "percent": 35 00:24:01.094 } 00:24:01.094 }, 00:24:01.094 "base_bdevs_list": [ 00:24:01.094 { 00:24:01.094 "name": "spare", 00:24:01.094 "uuid": "ead76a83-9b07-5e11-91d4-c7f902c6a847", 00:24:01.094 "is_configured": true, 00:24:01.094 "data_offset": 2048, 00:24:01.094 "data_size": 63488 00:24:01.094 }, 
00:24:01.094 { 00:24:01.095 "name": "BaseBdev2", 00:24:01.095 "uuid": "20b60e08-0f22-5c8c-973c-d6e9c9557cbf", 00:24:01.095 "is_configured": true, 00:24:01.095 "data_offset": 2048, 00:24:01.095 "data_size": 63488 00:24:01.095 }, 00:24:01.095 { 00:24:01.095 "name": "BaseBdev3", 00:24:01.095 "uuid": "e8584e91-b6eb-5cae-953b-d917dad89bdd", 00:24:01.095 "is_configured": true, 00:24:01.095 "data_offset": 2048, 00:24:01.095 "data_size": 63488 00:24:01.095 }, 00:24:01.095 { 00:24:01.095 "name": "BaseBdev4", 00:24:01.095 "uuid": "00148bfb-96c7-5c63-b8f9-829c2fbb5bea", 00:24:01.095 "is_configured": true, 00:24:01.095 "data_offset": 2048, 00:24:01.095 "data_size": 63488 00:24:01.095 } 00:24:01.095 ] 00:24:01.095 }' 00:24:01.095 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:01.095 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:01.095 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:01.095 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:01.095 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:24:01.095 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:24:01.095 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:24:01.095 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:24:01.095 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:24:01.095 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:24:01.095 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_remove_base_bdev BaseBdev2 00:24:01.355 [2024-07-13 22:07:20.554382] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:01.355 [2024-07-13 22:07:20.736279] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000caae10 00:24:01.614 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:24:01.614 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:24:01.614 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:01.614 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:01.614 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:01.614 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:01.614 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:01.614 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.614 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.614 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:01.614 "name": "raid_bdev1", 00:24:01.614 "uuid": "1983fe3c-c030-4dce-9f72-71a6d79693fe", 00:24:01.614 "strip_size_kb": 0, 00:24:01.614 "state": "online", 00:24:01.614 "raid_level": "raid1", 00:24:01.614 "superblock": true, 00:24:01.614 "num_base_bdevs": 4, 00:24:01.614 "num_base_bdevs_discovered": 3, 00:24:01.614 "num_base_bdevs_operational": 3, 00:24:01.614 "process": { 00:24:01.614 "type": "rebuild", 00:24:01.614 "target": "spare", 00:24:01.614 "progress": { 00:24:01.614 "blocks": 32768, 00:24:01.614 
"percent": 51 00:24:01.614 } 00:24:01.614 }, 00:24:01.614 "base_bdevs_list": [ 00:24:01.614 { 00:24:01.614 "name": "spare", 00:24:01.614 "uuid": "ead76a83-9b07-5e11-91d4-c7f902c6a847", 00:24:01.614 "is_configured": true, 00:24:01.614 "data_offset": 2048, 00:24:01.614 "data_size": 63488 00:24:01.614 }, 00:24:01.614 { 00:24:01.614 "name": null, 00:24:01.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.614 "is_configured": false, 00:24:01.614 "data_offset": 2048, 00:24:01.614 "data_size": 63488 00:24:01.614 }, 00:24:01.614 { 00:24:01.614 "name": "BaseBdev3", 00:24:01.614 "uuid": "e8584e91-b6eb-5cae-953b-d917dad89bdd", 00:24:01.614 "is_configured": true, 00:24:01.614 "data_offset": 2048, 00:24:01.614 "data_size": 63488 00:24:01.614 }, 00:24:01.614 { 00:24:01.614 "name": "BaseBdev4", 00:24:01.614 "uuid": "00148bfb-96c7-5c63-b8f9-829c2fbb5bea", 00:24:01.614 "is_configured": true, 00:24:01.614 "data_offset": 2048, 00:24:01.614 "data_size": 63488 00:24:01.614 } 00:24:01.614 ] 00:24:01.614 }' 00:24:01.614 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:01.614 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:01.614 22:07:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:01.614 22:07:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:01.614 22:07:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=772 00:24:01.614 22:07:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:01.875 22:07:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:01.875 22:07:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:01.875 22:07:21 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:01.875 22:07:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:01.875 22:07:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:01.875 22:07:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.875 22:07:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.875 22:07:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:01.875 "name": "raid_bdev1", 00:24:01.875 "uuid": "1983fe3c-c030-4dce-9f72-71a6d79693fe", 00:24:01.875 "strip_size_kb": 0, 00:24:01.875 "state": "online", 00:24:01.875 "raid_level": "raid1", 00:24:01.875 "superblock": true, 00:24:01.875 "num_base_bdevs": 4, 00:24:01.875 "num_base_bdevs_discovered": 3, 00:24:01.875 "num_base_bdevs_operational": 3, 00:24:01.875 "process": { 00:24:01.875 "type": "rebuild", 00:24:01.875 "target": "spare", 00:24:01.875 "progress": { 00:24:01.875 "blocks": 38912, 00:24:01.875 "percent": 61 00:24:01.875 } 00:24:01.875 }, 00:24:01.875 "base_bdevs_list": [ 00:24:01.875 { 00:24:01.875 "name": "spare", 00:24:01.875 "uuid": "ead76a83-9b07-5e11-91d4-c7f902c6a847", 00:24:01.875 "is_configured": true, 00:24:01.875 "data_offset": 2048, 00:24:01.875 "data_size": 63488 00:24:01.875 }, 00:24:01.875 { 00:24:01.875 "name": null, 00:24:01.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.875 "is_configured": false, 00:24:01.875 "data_offset": 2048, 00:24:01.875 "data_size": 63488 00:24:01.875 }, 00:24:01.875 { 00:24:01.875 "name": "BaseBdev3", 00:24:01.875 "uuid": "e8584e91-b6eb-5cae-953b-d917dad89bdd", 00:24:01.875 "is_configured": true, 00:24:01.875 "data_offset": 2048, 00:24:01.875 "data_size": 63488 00:24:01.875 }, 00:24:01.875 { 00:24:01.875 "name": 
"BaseBdev4", 00:24:01.875 "uuid": "00148bfb-96c7-5c63-b8f9-829c2fbb5bea", 00:24:01.875 "is_configured": true, 00:24:01.875 "data_offset": 2048, 00:24:01.875 "data_size": 63488 00:24:01.875 } 00:24:01.875 ] 00:24:01.875 }' 00:24:01.875 22:07:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:01.875 22:07:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:01.875 22:07:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:01.875 22:07:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:01.875 22:07:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:02.909 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:02.909 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:02.909 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:02.909 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:02.909 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:02.909 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:02.909 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.909 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.168 [2024-07-13 22:07:22.348574] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:03.168 [2024-07-13 22:07:22.348641] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: 
Finished rebuild on raid bdev raid_bdev1 00:24:03.168 [2024-07-13 22:07:22.348745] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:03.168 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:03.168 "name": "raid_bdev1", 00:24:03.168 "uuid": "1983fe3c-c030-4dce-9f72-71a6d79693fe", 00:24:03.168 "strip_size_kb": 0, 00:24:03.168 "state": "online", 00:24:03.168 "raid_level": "raid1", 00:24:03.168 "superblock": true, 00:24:03.168 "num_base_bdevs": 4, 00:24:03.168 "num_base_bdevs_discovered": 3, 00:24:03.168 "num_base_bdevs_operational": 3, 00:24:03.168 "base_bdevs_list": [ 00:24:03.168 { 00:24:03.168 "name": "spare", 00:24:03.168 "uuid": "ead76a83-9b07-5e11-91d4-c7f902c6a847", 00:24:03.168 "is_configured": true, 00:24:03.168 "data_offset": 2048, 00:24:03.168 "data_size": 63488 00:24:03.168 }, 00:24:03.168 { 00:24:03.168 "name": null, 00:24:03.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:03.168 "is_configured": false, 00:24:03.168 "data_offset": 2048, 00:24:03.168 "data_size": 63488 00:24:03.168 }, 00:24:03.168 { 00:24:03.168 "name": "BaseBdev3", 00:24:03.168 "uuid": "e8584e91-b6eb-5cae-953b-d917dad89bdd", 00:24:03.168 "is_configured": true, 00:24:03.168 "data_offset": 2048, 00:24:03.168 "data_size": 63488 00:24:03.168 }, 00:24:03.168 { 00:24:03.168 "name": "BaseBdev4", 00:24:03.168 "uuid": "00148bfb-96c7-5c63-b8f9-829c2fbb5bea", 00:24:03.168 "is_configured": true, 00:24:03.168 "data_offset": 2048, 00:24:03.168 "data_size": 63488 00:24:03.168 } 00:24:03.168 ] 00:24:03.168 }' 00:24:03.169 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:03.169 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:03.169 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:03.169 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- 
# [[ none == \s\p\a\r\e ]] 00:24:03.169 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:24:03.169 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:03.169 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:03.169 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:03.169 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:03.169 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:03.169 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.169 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.428 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:03.428 "name": "raid_bdev1", 00:24:03.428 "uuid": "1983fe3c-c030-4dce-9f72-71a6d79693fe", 00:24:03.428 "strip_size_kb": 0, 00:24:03.428 "state": "online", 00:24:03.428 "raid_level": "raid1", 00:24:03.428 "superblock": true, 00:24:03.428 "num_base_bdevs": 4, 00:24:03.428 "num_base_bdevs_discovered": 3, 00:24:03.428 "num_base_bdevs_operational": 3, 00:24:03.428 "base_bdevs_list": [ 00:24:03.428 { 00:24:03.428 "name": "spare", 00:24:03.428 "uuid": "ead76a83-9b07-5e11-91d4-c7f902c6a847", 00:24:03.428 "is_configured": true, 00:24:03.428 "data_offset": 2048, 00:24:03.428 "data_size": 63488 00:24:03.428 }, 00:24:03.428 { 00:24:03.428 "name": null, 00:24:03.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:03.428 "is_configured": false, 00:24:03.428 "data_offset": 2048, 00:24:03.428 "data_size": 63488 00:24:03.428 }, 00:24:03.428 { 00:24:03.428 "name": "BaseBdev3", 00:24:03.429 
"uuid": "e8584e91-b6eb-5cae-953b-d917dad89bdd", 00:24:03.429 "is_configured": true, 00:24:03.429 "data_offset": 2048, 00:24:03.429 "data_size": 63488 00:24:03.429 }, 00:24:03.429 { 00:24:03.429 "name": "BaseBdev4", 00:24:03.429 "uuid": "00148bfb-96c7-5c63-b8f9-829c2fbb5bea", 00:24:03.429 "is_configured": true, 00:24:03.429 "data_offset": 2048, 00:24:03.429 "data_size": 63488 00:24:03.429 } 00:24:03.429 ] 00:24:03.429 }' 00:24:03.429 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:03.429 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:03.429 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:03.429 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:03.429 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:03.429 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:03.429 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:03.429 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:03.429 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:03.429 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:03.429 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:03.429 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:03.429 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:03.429 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:03.429 22:07:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.429 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.688 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:03.688 "name": "raid_bdev1", 00:24:03.688 "uuid": "1983fe3c-c030-4dce-9f72-71a6d79693fe", 00:24:03.688 "strip_size_kb": 0, 00:24:03.688 "state": "online", 00:24:03.688 "raid_level": "raid1", 00:24:03.688 "superblock": true, 00:24:03.688 "num_base_bdevs": 4, 00:24:03.688 "num_base_bdevs_discovered": 3, 00:24:03.688 "num_base_bdevs_operational": 3, 00:24:03.688 "base_bdevs_list": [ 00:24:03.688 { 00:24:03.688 "name": "spare", 00:24:03.688 "uuid": "ead76a83-9b07-5e11-91d4-c7f902c6a847", 00:24:03.688 "is_configured": true, 00:24:03.688 "data_offset": 2048, 00:24:03.688 "data_size": 63488 00:24:03.688 }, 00:24:03.688 { 00:24:03.688 "name": null, 00:24:03.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:03.688 "is_configured": false, 00:24:03.688 "data_offset": 2048, 00:24:03.689 "data_size": 63488 00:24:03.689 }, 00:24:03.689 { 00:24:03.689 "name": "BaseBdev3", 00:24:03.689 "uuid": "e8584e91-b6eb-5cae-953b-d917dad89bdd", 00:24:03.689 "is_configured": true, 00:24:03.689 "data_offset": 2048, 00:24:03.689 "data_size": 63488 00:24:03.689 }, 00:24:03.689 { 00:24:03.689 "name": "BaseBdev4", 00:24:03.689 "uuid": "00148bfb-96c7-5c63-b8f9-829c2fbb5bea", 00:24:03.689 "is_configured": true, 00:24:03.689 "data_offset": 2048, 00:24:03.689 "data_size": 63488 00:24:03.689 } 00:24:03.689 ] 00:24:03.689 }' 00:24:03.689 22:07:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:03.689 22:07:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:04.257 22:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:04.257 [2024-07-13 22:07:23.582533] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:04.257 [2024-07-13 22:07:23.582570] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:04.257 [2024-07-13 22:07:23.582657] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:04.257 [2024-07-13 22:07:23.582749] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:04.257 [2024-07-13 22:07:23.582761] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:24:04.257 22:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.257 22:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:24:04.515 22:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:24:04.515 22:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:24:04.515 22:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:24:04.515 22:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:04.515 22:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:04.515 22:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:04.515 22:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:04.515 22:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:24:04.515 22:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:04.515 22:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:24:04.515 22:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:04.515 22:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:04.515 22:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:04.774 /dev/nbd0 00:24:04.774 22:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:04.774 22:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:04.774 22:07:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:04.774 22:07:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:24:04.774 22:07:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:04.774 22:07:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:04.774 22:07:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:04.774 22:07:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:24:04.774 22:07:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:04.774 22:07:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:04.774 22:07:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:04.774 1+0 records in 00:24:04.774 1+0 records out 00:24:04.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262362 s, 
15.6 MB/s 00:24:04.774 22:07:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:04.774 22:07:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:24:04.774 22:07:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:04.774 22:07:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:04.774 22:07:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:24:04.774 22:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:04.774 22:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:04.774 22:07:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:05.034 /dev/nbd1 00:24:05.034 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:05.034 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:05.034 22:07:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:24:05.034 22:07:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:24:05.034 22:07:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:05.034 22:07:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:05.034 22:07:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:24:05.034 22:07:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:24:05.034 22:07:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # 
(( i = 1 )) 00:24:05.034 22:07:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:05.034 22:07:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:05.034 1+0 records in 00:24:05.034 1+0 records out 00:24:05.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288064 s, 14.2 MB/s 00:24:05.034 22:07:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:05.034 22:07:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:24:05.034 22:07:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:05.034 22:07:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:05.034 22:07:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:24:05.034 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:05.034 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:05.034 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:05.034 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:05.034 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:05.034 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:05.034 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:05.034 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:24:05.034 22:07:24 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:05.034 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:05.292 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:05.292 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:05.292 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:05.292 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:05.292 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:05.292 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:05.292 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:05.292 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:05.292 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:05.293 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:05.551 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:05.551 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:05.551 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:05.552 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:05.552 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:05.552 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:24:05.552 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:05.552 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:05.552 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:24:05.552 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:05.552 22:07:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:05.811 [2024-07-13 22:07:25.079599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:05.811 [2024-07-13 22:07:25.079651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:05.811 [2024-07-13 22:07:25.079676] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044a80 00:24:05.811 [2024-07-13 22:07:25.079687] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:05.811 [2024-07-13 22:07:25.081798] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:05.811 [2024-07-13 22:07:25.081825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:05.811 [2024-07-13 22:07:25.081911] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:05.811 [2024-07-13 22:07:25.081963] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:05.811 [2024-07-13 22:07:25.082123] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:05.811 [2024-07-13 22:07:25.082204] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:05.811 spare 00:24:05.811 22:07:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:05.811 22:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:05.811 22:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:05.811 22:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:05.811 22:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:05.811 22:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:05.811 22:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:05.811 22:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:05.811 22:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:05.811 22:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:05.811 22:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.811 22:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.811 [2024-07-13 22:07:25.182520] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000045080 00:24:05.811 [2024-07-13 22:07:25.182542] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:05.811 [2024-07-13 22:07:25.182805] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc94c0 00:24:05.811 [2024-07-13 22:07:25.182998] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000045080 00:24:05.811 [2024-07-13 22:07:25.183012] bdev_raid.c:1725:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000045080 00:24:05.811 [2024-07-13 22:07:25.183141] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:06.071 22:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:06.071 "name": "raid_bdev1", 00:24:06.071 "uuid": "1983fe3c-c030-4dce-9f72-71a6d79693fe", 00:24:06.071 "strip_size_kb": 0, 00:24:06.071 "state": "online", 00:24:06.071 "raid_level": "raid1", 00:24:06.071 "superblock": true, 00:24:06.071 "num_base_bdevs": 4, 00:24:06.071 "num_base_bdevs_discovered": 3, 00:24:06.071 "num_base_bdevs_operational": 3, 00:24:06.071 "base_bdevs_list": [ 00:24:06.071 { 00:24:06.071 "name": "spare", 00:24:06.071 "uuid": "ead76a83-9b07-5e11-91d4-c7f902c6a847", 00:24:06.071 "is_configured": true, 00:24:06.071 "data_offset": 2048, 00:24:06.071 "data_size": 63488 00:24:06.071 }, 00:24:06.071 { 00:24:06.071 "name": null, 00:24:06.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.071 "is_configured": false, 00:24:06.071 "data_offset": 2048, 00:24:06.071 "data_size": 63488 00:24:06.071 }, 00:24:06.071 { 00:24:06.071 "name": "BaseBdev3", 00:24:06.071 "uuid": "e8584e91-b6eb-5cae-953b-d917dad89bdd", 00:24:06.071 "is_configured": true, 00:24:06.071 "data_offset": 2048, 00:24:06.071 "data_size": 63488 00:24:06.071 }, 00:24:06.071 { 00:24:06.071 "name": "BaseBdev4", 00:24:06.071 "uuid": "00148bfb-96c7-5c63-b8f9-829c2fbb5bea", 00:24:06.071 "is_configured": true, 00:24:06.071 "data_offset": 2048, 00:24:06.071 "data_size": 63488 00:24:06.071 } 00:24:06.071 ] 00:24:06.071 }' 00:24:06.071 22:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:06.071 22:07:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:06.639 22:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:06.639 22:07:25 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:06.639 22:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:06.639 22:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:06.639 22:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:06.639 22:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.639 22:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.639 22:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:06.639 "name": "raid_bdev1", 00:24:06.639 "uuid": "1983fe3c-c030-4dce-9f72-71a6d79693fe", 00:24:06.639 "strip_size_kb": 0, 00:24:06.639 "state": "online", 00:24:06.639 "raid_level": "raid1", 00:24:06.639 "superblock": true, 00:24:06.639 "num_base_bdevs": 4, 00:24:06.639 "num_base_bdevs_discovered": 3, 00:24:06.639 "num_base_bdevs_operational": 3, 00:24:06.639 "base_bdevs_list": [ 00:24:06.639 { 00:24:06.639 "name": "spare", 00:24:06.639 "uuid": "ead76a83-9b07-5e11-91d4-c7f902c6a847", 00:24:06.639 "is_configured": true, 00:24:06.639 "data_offset": 2048, 00:24:06.639 "data_size": 63488 00:24:06.639 }, 00:24:06.639 { 00:24:06.639 "name": null, 00:24:06.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.639 "is_configured": false, 00:24:06.639 "data_offset": 2048, 00:24:06.639 "data_size": 63488 00:24:06.639 }, 00:24:06.639 { 00:24:06.639 "name": "BaseBdev3", 00:24:06.639 "uuid": "e8584e91-b6eb-5cae-953b-d917dad89bdd", 00:24:06.639 "is_configured": true, 00:24:06.639 "data_offset": 2048, 00:24:06.639 "data_size": 63488 00:24:06.639 }, 00:24:06.639 { 00:24:06.639 "name": "BaseBdev4", 00:24:06.639 "uuid": "00148bfb-96c7-5c63-b8f9-829c2fbb5bea", 00:24:06.639 "is_configured": 
true, 00:24:06.639 "data_offset": 2048, 00:24:06.639 "data_size": 63488 00:24:06.639 } 00:24:06.639 ] 00:24:06.639 }' 00:24:06.639 22:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:06.639 22:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:06.639 22:07:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:06.639 22:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:06.639 22:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.898 22:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:06.898 22:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:24:06.898 22:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:07.157 [2024-07-13 22:07:26.338995] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:07.157 22:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:07.157 22:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:07.157 22:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:07.157 22:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:07.157 22:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:07.157 22:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:07.157 
22:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:07.157 22:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:07.157 22:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:07.157 22:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:07.157 22:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.157 22:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.157 22:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:07.157 "name": "raid_bdev1", 00:24:07.157 "uuid": "1983fe3c-c030-4dce-9f72-71a6d79693fe", 00:24:07.157 "strip_size_kb": 0, 00:24:07.157 "state": "online", 00:24:07.157 "raid_level": "raid1", 00:24:07.157 "superblock": true, 00:24:07.157 "num_base_bdevs": 4, 00:24:07.157 "num_base_bdevs_discovered": 2, 00:24:07.157 "num_base_bdevs_operational": 2, 00:24:07.157 "base_bdevs_list": [ 00:24:07.157 { 00:24:07.157 "name": null, 00:24:07.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.157 "is_configured": false, 00:24:07.157 "data_offset": 2048, 00:24:07.157 "data_size": 63488 00:24:07.157 }, 00:24:07.157 { 00:24:07.157 "name": null, 00:24:07.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.157 "is_configured": false, 00:24:07.157 "data_offset": 2048, 00:24:07.157 "data_size": 63488 00:24:07.157 }, 00:24:07.157 { 00:24:07.157 "name": "BaseBdev3", 00:24:07.157 "uuid": "e8584e91-b6eb-5cae-953b-d917dad89bdd", 00:24:07.157 "is_configured": true, 00:24:07.157 "data_offset": 2048, 00:24:07.157 "data_size": 63488 00:24:07.157 }, 00:24:07.157 { 00:24:07.157 "name": "BaseBdev4", 00:24:07.157 "uuid": "00148bfb-96c7-5c63-b8f9-829c2fbb5bea", 
00:24:07.157 "is_configured": true, 00:24:07.157 "data_offset": 2048, 00:24:07.157 "data_size": 63488 00:24:07.157 } 00:24:07.157 ] 00:24:07.157 }' 00:24:07.157 22:07:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:07.157 22:07:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:07.724 22:07:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:07.983 [2024-07-13 22:07:27.197246] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:07.983 [2024-07-13 22:07:27.197437] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:24:07.983 [2024-07-13 22:07:27.197457] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:07.983 [2024-07-13 22:07:27.197489] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:07.983 [2024-07-13 22:07:27.214090] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc9590 00:24:07.983 [2024-07-13 22:07:27.215838] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:07.983 22:07:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:24:08.922 22:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:08.922 22:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:08.922 22:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:08.922 22:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:08.922 22:07:28 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:08.922 22:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.922 22:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.181 22:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:09.181 "name": "raid_bdev1", 00:24:09.181 "uuid": "1983fe3c-c030-4dce-9f72-71a6d79693fe", 00:24:09.181 "strip_size_kb": 0, 00:24:09.181 "state": "online", 00:24:09.181 "raid_level": "raid1", 00:24:09.181 "superblock": true, 00:24:09.181 "num_base_bdevs": 4, 00:24:09.181 "num_base_bdevs_discovered": 3, 00:24:09.181 "num_base_bdevs_operational": 3, 00:24:09.181 "process": { 00:24:09.181 "type": "rebuild", 00:24:09.181 "target": "spare", 00:24:09.181 "progress": { 00:24:09.181 "blocks": 22528, 00:24:09.181 "percent": 35 00:24:09.181 } 00:24:09.181 }, 00:24:09.181 "base_bdevs_list": [ 00:24:09.181 { 00:24:09.181 "name": "spare", 00:24:09.181 "uuid": "ead76a83-9b07-5e11-91d4-c7f902c6a847", 00:24:09.181 "is_configured": true, 00:24:09.181 "data_offset": 2048, 00:24:09.181 "data_size": 63488 00:24:09.181 }, 00:24:09.181 { 00:24:09.181 "name": null, 00:24:09.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.181 "is_configured": false, 00:24:09.181 "data_offset": 2048, 00:24:09.181 "data_size": 63488 00:24:09.181 }, 00:24:09.181 { 00:24:09.181 "name": "BaseBdev3", 00:24:09.181 "uuid": "e8584e91-b6eb-5cae-953b-d917dad89bdd", 00:24:09.181 "is_configured": true, 00:24:09.181 "data_offset": 2048, 00:24:09.181 "data_size": 63488 00:24:09.181 }, 00:24:09.181 { 00:24:09.181 "name": "BaseBdev4", 00:24:09.181 "uuid": "00148bfb-96c7-5c63-b8f9-829c2fbb5bea", 00:24:09.181 "is_configured": true, 00:24:09.181 "data_offset": 2048, 00:24:09.181 "data_size": 63488 00:24:09.181 } 00:24:09.181 ] 00:24:09.181 }' 
00:24:09.181 22:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:09.181 22:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:09.181 22:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:09.181 22:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:09.181 22:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:09.440 [2024-07-13 22:07:28.653763] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:09.440 [2024-07-13 22:07:28.727309] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:09.440 [2024-07-13 22:07:28.727358] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:09.440 [2024-07-13 22:07:28.727376] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:09.440 [2024-07-13 22:07:28.727385] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:09.440 22:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:09.440 22:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:09.440 22:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:09.440 22:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:09.440 22:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:09.440 22:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:09.440 22:07:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:09.440 22:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:09.440 22:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:09.440 22:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:09.440 22:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.440 22:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.699 22:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:09.699 "name": "raid_bdev1", 00:24:09.699 "uuid": "1983fe3c-c030-4dce-9f72-71a6d79693fe", 00:24:09.699 "strip_size_kb": 0, 00:24:09.699 "state": "online", 00:24:09.699 "raid_level": "raid1", 00:24:09.699 "superblock": true, 00:24:09.699 "num_base_bdevs": 4, 00:24:09.699 "num_base_bdevs_discovered": 2, 00:24:09.699 "num_base_bdevs_operational": 2, 00:24:09.699 "base_bdevs_list": [ 00:24:09.699 { 00:24:09.699 "name": null, 00:24:09.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.699 "is_configured": false, 00:24:09.699 "data_offset": 2048, 00:24:09.699 "data_size": 63488 00:24:09.699 }, 00:24:09.699 { 00:24:09.699 "name": null, 00:24:09.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.699 "is_configured": false, 00:24:09.699 "data_offset": 2048, 00:24:09.699 "data_size": 63488 00:24:09.699 }, 00:24:09.699 { 00:24:09.699 "name": "BaseBdev3", 00:24:09.699 "uuid": "e8584e91-b6eb-5cae-953b-d917dad89bdd", 00:24:09.699 "is_configured": true, 00:24:09.699 "data_offset": 2048, 00:24:09.699 "data_size": 63488 00:24:09.699 }, 00:24:09.699 { 00:24:09.699 "name": "BaseBdev4", 00:24:09.699 "uuid": "00148bfb-96c7-5c63-b8f9-829c2fbb5bea", 
00:24:09.699 "is_configured": true, 00:24:09.699 "data_offset": 2048, 00:24:09.699 "data_size": 63488 00:24:09.699 } 00:24:09.699 ] 00:24:09.699 }' 00:24:09.699 22:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:09.699 22:07:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:10.267 22:07:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:10.267 [2024-07-13 22:07:29.555115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:10.267 [2024-07-13 22:07:29.555177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:10.267 [2024-07-13 22:07:29.555220] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000045680 00:24:10.267 [2024-07-13 22:07:29.555231] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:10.267 [2024-07-13 22:07:29.555761] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:10.267 [2024-07-13 22:07:29.555785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:10.267 [2024-07-13 22:07:29.555884] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:10.267 [2024-07-13 22:07:29.555897] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:24:10.267 [2024-07-13 22:07:29.555919] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:10.267 [2024-07-13 22:07:29.555951] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:10.267 [2024-07-13 22:07:29.571633] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc9660 00:24:10.267 spare 00:24:10.267 [2024-07-13 22:07:29.573425] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:10.267 22:07:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:24:11.642 22:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:11.642 22:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:11.642 22:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:11.642 22:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:11.642 22:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:11.642 22:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.642 22:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.642 22:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:11.642 "name": "raid_bdev1", 00:24:11.642 "uuid": "1983fe3c-c030-4dce-9f72-71a6d79693fe", 00:24:11.642 "strip_size_kb": 0, 00:24:11.642 "state": "online", 00:24:11.642 "raid_level": "raid1", 00:24:11.642 "superblock": true, 00:24:11.642 "num_base_bdevs": 4, 00:24:11.643 "num_base_bdevs_discovered": 3, 00:24:11.643 "num_base_bdevs_operational": 3, 00:24:11.643 "process": { 00:24:11.643 "type": "rebuild", 00:24:11.643 "target": "spare", 00:24:11.643 "progress": { 00:24:11.643 "blocks": 22528, 00:24:11.643 
"percent": 35 00:24:11.643 } 00:24:11.643 }, 00:24:11.643 "base_bdevs_list": [ 00:24:11.643 { 00:24:11.643 "name": "spare", 00:24:11.643 "uuid": "ead76a83-9b07-5e11-91d4-c7f902c6a847", 00:24:11.643 "is_configured": true, 00:24:11.643 "data_offset": 2048, 00:24:11.643 "data_size": 63488 00:24:11.643 }, 00:24:11.643 { 00:24:11.643 "name": null, 00:24:11.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:11.643 "is_configured": false, 00:24:11.643 "data_offset": 2048, 00:24:11.643 "data_size": 63488 00:24:11.643 }, 00:24:11.643 { 00:24:11.643 "name": "BaseBdev3", 00:24:11.643 "uuid": "e8584e91-b6eb-5cae-953b-d917dad89bdd", 00:24:11.643 "is_configured": true, 00:24:11.643 "data_offset": 2048, 00:24:11.643 "data_size": 63488 00:24:11.643 }, 00:24:11.643 { 00:24:11.643 "name": "BaseBdev4", 00:24:11.643 "uuid": "00148bfb-96c7-5c63-b8f9-829c2fbb5bea", 00:24:11.643 "is_configured": true, 00:24:11.643 "data_offset": 2048, 00:24:11.643 "data_size": 63488 00:24:11.643 } 00:24:11.643 ] 00:24:11.643 }' 00:24:11.643 22:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:11.643 22:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:11.643 22:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:11.643 22:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:11.643 22:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:11.643 [2024-07-13 22:07:31.014966] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:11.900 [2024-07-13 22:07:31.084893] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:11.900 [2024-07-13 22:07:31.084951] bdev_raid.c: 
331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:11.900 [2024-07-13 22:07:31.084968] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:11.900 [2024-07-13 22:07:31.084979] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:11.900 22:07:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:11.900 22:07:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:11.900 22:07:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:11.900 22:07:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:11.900 22:07:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:11.900 22:07:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:11.900 22:07:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:11.900 22:07:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:11.900 22:07:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:11.900 22:07:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:11.900 22:07:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.900 22:07:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.159 22:07:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:12.159 "name": "raid_bdev1", 00:24:12.159 "uuid": "1983fe3c-c030-4dce-9f72-71a6d79693fe", 00:24:12.159 "strip_size_kb": 0, 00:24:12.159 "state": 
"online", 00:24:12.159 "raid_level": "raid1", 00:24:12.159 "superblock": true, 00:24:12.159 "num_base_bdevs": 4, 00:24:12.159 "num_base_bdevs_discovered": 2, 00:24:12.159 "num_base_bdevs_operational": 2, 00:24:12.159 "base_bdevs_list": [ 00:24:12.159 { 00:24:12.159 "name": null, 00:24:12.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.159 "is_configured": false, 00:24:12.159 "data_offset": 2048, 00:24:12.159 "data_size": 63488 00:24:12.159 }, 00:24:12.159 { 00:24:12.159 "name": null, 00:24:12.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.159 "is_configured": false, 00:24:12.159 "data_offset": 2048, 00:24:12.159 "data_size": 63488 00:24:12.159 }, 00:24:12.159 { 00:24:12.159 "name": "BaseBdev3", 00:24:12.159 "uuid": "e8584e91-b6eb-5cae-953b-d917dad89bdd", 00:24:12.159 "is_configured": true, 00:24:12.159 "data_offset": 2048, 00:24:12.159 "data_size": 63488 00:24:12.159 }, 00:24:12.159 { 00:24:12.159 "name": "BaseBdev4", 00:24:12.159 "uuid": "00148bfb-96c7-5c63-b8f9-829c2fbb5bea", 00:24:12.159 "is_configured": true, 00:24:12.159 "data_offset": 2048, 00:24:12.159 "data_size": 63488 00:24:12.159 } 00:24:12.159 ] 00:24:12.159 }' 00:24:12.159 22:07:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:12.159 22:07:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.417 22:07:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:12.417 22:07:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:12.417 22:07:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:12.417 22:07:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:12.417 22:07:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:12.417 22:07:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.417 22:07:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.675 22:07:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:12.675 "name": "raid_bdev1", 00:24:12.675 "uuid": "1983fe3c-c030-4dce-9f72-71a6d79693fe", 00:24:12.675 "strip_size_kb": 0, 00:24:12.675 "state": "online", 00:24:12.675 "raid_level": "raid1", 00:24:12.675 "superblock": true, 00:24:12.675 "num_base_bdevs": 4, 00:24:12.675 "num_base_bdevs_discovered": 2, 00:24:12.675 "num_base_bdevs_operational": 2, 00:24:12.675 "base_bdevs_list": [ 00:24:12.675 { 00:24:12.675 "name": null, 00:24:12.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.675 "is_configured": false, 00:24:12.675 "data_offset": 2048, 00:24:12.675 "data_size": 63488 00:24:12.675 }, 00:24:12.675 { 00:24:12.675 "name": null, 00:24:12.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.675 "is_configured": false, 00:24:12.675 "data_offset": 2048, 00:24:12.675 "data_size": 63488 00:24:12.675 }, 00:24:12.675 { 00:24:12.675 "name": "BaseBdev3", 00:24:12.675 "uuid": "e8584e91-b6eb-5cae-953b-d917dad89bdd", 00:24:12.675 "is_configured": true, 00:24:12.675 "data_offset": 2048, 00:24:12.675 "data_size": 63488 00:24:12.675 }, 00:24:12.675 { 00:24:12.675 "name": "BaseBdev4", 00:24:12.675 "uuid": "00148bfb-96c7-5c63-b8f9-829c2fbb5bea", 00:24:12.675 "is_configured": true, 00:24:12.675 "data_offset": 2048, 00:24:12.675 "data_size": 63488 00:24:12.675 } 00:24:12.675 ] 00:24:12.675 }' 00:24:12.675 22:07:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:12.675 22:07:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:12.675 22:07:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:24:12.675 22:07:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:12.675 22:07:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:12.934 22:07:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:13.192 [2024-07-13 22:07:32.345172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:13.192 [2024-07-13 22:07:32.345258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:13.192 [2024-07-13 22:07:32.345284] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000045c80 00:24:13.192 [2024-07-13 22:07:32.345297] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:13.192 [2024-07-13 22:07:32.345778] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:13.192 [2024-07-13 22:07:32.345801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:13.192 [2024-07-13 22:07:32.345881] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:13.192 [2024-07-13 22:07:32.345899] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:24:13.192 [2024-07-13 22:07:32.345918] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:13.192 BaseBdev1 00:24:13.192 22:07:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:24:14.126 22:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 
00:24:14.126 22:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:14.126 22:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:14.126 22:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:14.126 22:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:14.126 22:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:14.126 22:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:14.126 22:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:14.126 22:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:14.126 22:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:14.126 22:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.126 22:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.383 22:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:14.383 "name": "raid_bdev1", 00:24:14.383 "uuid": "1983fe3c-c030-4dce-9f72-71a6d79693fe", 00:24:14.383 "strip_size_kb": 0, 00:24:14.383 "state": "online", 00:24:14.383 "raid_level": "raid1", 00:24:14.383 "superblock": true, 00:24:14.383 "num_base_bdevs": 4, 00:24:14.383 "num_base_bdevs_discovered": 2, 00:24:14.383 "num_base_bdevs_operational": 2, 00:24:14.383 "base_bdevs_list": [ 00:24:14.383 { 00:24:14.383 "name": null, 00:24:14.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.383 "is_configured": false, 00:24:14.383 "data_offset": 2048, 00:24:14.383 "data_size": 63488 
00:24:14.383 }, 00:24:14.383 { 00:24:14.383 "name": null, 00:24:14.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.383 "is_configured": false, 00:24:14.383 "data_offset": 2048, 00:24:14.383 "data_size": 63488 00:24:14.383 }, 00:24:14.383 { 00:24:14.383 "name": "BaseBdev3", 00:24:14.383 "uuid": "e8584e91-b6eb-5cae-953b-d917dad89bdd", 00:24:14.383 "is_configured": true, 00:24:14.383 "data_offset": 2048, 00:24:14.383 "data_size": 63488 00:24:14.383 }, 00:24:14.383 { 00:24:14.383 "name": "BaseBdev4", 00:24:14.383 "uuid": "00148bfb-96c7-5c63-b8f9-829c2fbb5bea", 00:24:14.383 "is_configured": true, 00:24:14.383 "data_offset": 2048, 00:24:14.383 "data_size": 63488 00:24:14.383 } 00:24:14.383 ] 00:24:14.383 }' 00:24:14.383 22:07:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:14.383 22:07:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:14.947 22:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:14.947 22:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:14.947 22:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:14.947 22:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:14.947 22:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:14.947 22:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.947 22:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.947 22:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:14.947 "name": "raid_bdev1", 00:24:14.947 "uuid": "1983fe3c-c030-4dce-9f72-71a6d79693fe", 
00:24:14.947 "strip_size_kb": 0, 00:24:14.947 "state": "online", 00:24:14.947 "raid_level": "raid1", 00:24:14.947 "superblock": true, 00:24:14.947 "num_base_bdevs": 4, 00:24:14.947 "num_base_bdevs_discovered": 2, 00:24:14.947 "num_base_bdevs_operational": 2, 00:24:14.947 "base_bdevs_list": [ 00:24:14.947 { 00:24:14.947 "name": null, 00:24:14.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.947 "is_configured": false, 00:24:14.947 "data_offset": 2048, 00:24:14.947 "data_size": 63488 00:24:14.947 }, 00:24:14.947 { 00:24:14.947 "name": null, 00:24:14.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.947 "is_configured": false, 00:24:14.947 "data_offset": 2048, 00:24:14.947 "data_size": 63488 00:24:14.947 }, 00:24:14.947 { 00:24:14.947 "name": "BaseBdev3", 00:24:14.947 "uuid": "e8584e91-b6eb-5cae-953b-d917dad89bdd", 00:24:14.947 "is_configured": true, 00:24:14.947 "data_offset": 2048, 00:24:14.947 "data_size": 63488 00:24:14.947 }, 00:24:14.947 { 00:24:14.947 "name": "BaseBdev4", 00:24:14.947 "uuid": "00148bfb-96c7-5c63-b8f9-829c2fbb5bea", 00:24:14.947 "is_configured": true, 00:24:14.947 "data_offset": 2048, 00:24:14.947 "data_size": 63488 00:24:14.947 } 00:24:14.947 ] 00:24:14.947 }' 00:24:14.947 22:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:14.947 22:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:14.947 22:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:14.947 22:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:14.947 22:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:14.947 22:07:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 
00:24:14.947 22:07:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:14.947 22:07:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:14.947 22:07:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:14.947 22:07:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:14.947 22:07:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:14.947 22:07:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:14.947 22:07:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:14.947 22:07:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:14.947 22:07:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:24:14.947 22:07:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:15.205 [2024-07-13 22:07:34.470801] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:15.205 [2024-07-13 22:07:34.470972] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:24:15.205 [2024-07-13 22:07:34.470998] 
bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:15.205 request: 00:24:15.205 { 00:24:15.205 "base_bdev": "BaseBdev1", 00:24:15.205 "raid_bdev": "raid_bdev1", 00:24:15.205 "method": "bdev_raid_add_base_bdev", 00:24:15.205 "req_id": 1 00:24:15.205 } 00:24:15.205 Got JSON-RPC error response 00:24:15.205 response: 00:24:15.205 { 00:24:15.205 "code": -22, 00:24:15.205 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:15.205 } 00:24:15.205 22:07:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:24:15.205 22:07:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:15.205 22:07:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:15.205 22:07:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:15.205 22:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:24:16.172 22:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:16.172 22:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:16.172 22:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:16.172 22:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:16.172 22:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:16.172 22:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:16.172 22:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:16.172 22:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:16.172 22:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:24:16.172 22:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:16.172 22:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.172 22:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.429 22:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:16.429 "name": "raid_bdev1", 00:24:16.429 "uuid": "1983fe3c-c030-4dce-9f72-71a6d79693fe", 00:24:16.429 "strip_size_kb": 0, 00:24:16.429 "state": "online", 00:24:16.429 "raid_level": "raid1", 00:24:16.429 "superblock": true, 00:24:16.429 "num_base_bdevs": 4, 00:24:16.429 "num_base_bdevs_discovered": 2, 00:24:16.429 "num_base_bdevs_operational": 2, 00:24:16.429 "base_bdevs_list": [ 00:24:16.429 { 00:24:16.429 "name": null, 00:24:16.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.429 "is_configured": false, 00:24:16.429 "data_offset": 2048, 00:24:16.429 "data_size": 63488 00:24:16.429 }, 00:24:16.429 { 00:24:16.429 "name": null, 00:24:16.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.429 "is_configured": false, 00:24:16.429 "data_offset": 2048, 00:24:16.429 "data_size": 63488 00:24:16.429 }, 00:24:16.429 { 00:24:16.429 "name": "BaseBdev3", 00:24:16.429 "uuid": "e8584e91-b6eb-5cae-953b-d917dad89bdd", 00:24:16.429 "is_configured": true, 00:24:16.429 "data_offset": 2048, 00:24:16.429 "data_size": 63488 00:24:16.429 }, 00:24:16.429 { 00:24:16.429 "name": "BaseBdev4", 00:24:16.429 "uuid": "00148bfb-96c7-5c63-b8f9-829c2fbb5bea", 00:24:16.429 "is_configured": true, 00:24:16.429 "data_offset": 2048, 00:24:16.429 "data_size": 63488 00:24:16.429 } 00:24:16.429 ] 00:24:16.429 }' 00:24:16.429 22:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:16.429 22:07:35 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:16.994 22:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:16.994 22:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:16.994 22:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:16.994 22:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:16.994 22:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:16.994 22:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.994 22:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.994 22:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:16.994 "name": "raid_bdev1", 00:24:16.994 "uuid": "1983fe3c-c030-4dce-9f72-71a6d79693fe", 00:24:16.994 "strip_size_kb": 0, 00:24:16.994 "state": "online", 00:24:16.994 "raid_level": "raid1", 00:24:16.994 "superblock": true, 00:24:16.994 "num_base_bdevs": 4, 00:24:16.994 "num_base_bdevs_discovered": 2, 00:24:16.994 "num_base_bdevs_operational": 2, 00:24:16.994 "base_bdevs_list": [ 00:24:16.994 { 00:24:16.994 "name": null, 00:24:16.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.994 "is_configured": false, 00:24:16.994 "data_offset": 2048, 00:24:16.994 "data_size": 63488 00:24:16.994 }, 00:24:16.994 { 00:24:16.994 "name": null, 00:24:16.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.994 "is_configured": false, 00:24:16.994 "data_offset": 2048, 00:24:16.994 "data_size": 63488 00:24:16.994 }, 00:24:16.994 { 00:24:16.994 "name": "BaseBdev3", 00:24:16.994 "uuid": "e8584e91-b6eb-5cae-953b-d917dad89bdd", 
00:24:16.994 "is_configured": true, 00:24:16.994 "data_offset": 2048, 00:24:16.994 "data_size": 63488 00:24:16.994 }, 00:24:16.994 { 00:24:16.994 "name": "BaseBdev4", 00:24:16.994 "uuid": "00148bfb-96c7-5c63-b8f9-829c2fbb5bea", 00:24:16.994 "is_configured": true, 00:24:16.994 "data_offset": 2048, 00:24:16.994 "data_size": 63488 00:24:16.994 } 00:24:16.994 ] 00:24:16.994 }' 00:24:16.994 22:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:16.994 22:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:16.994 22:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:17.252 22:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:17.252 22:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 1482113 00:24:17.252 22:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1482113 ']' 00:24:17.252 22:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 1482113 00:24:17.252 22:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:24:17.252 22:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:17.252 22:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1482113 00:24:17.252 22:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:17.252 22:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:17.252 22:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1482113' 00:24:17.252 killing process with pid 1482113 00:24:17.252 22:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 1482113 00:24:17.252 
Received shutdown signal, test time was about 60.000000 seconds 00:24:17.252 00:24:17.253 Latency(us) 00:24:17.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.253 =================================================================================================================== 00:24:17.253 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:17.253 [2024-07-13 22:07:36.478120] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:17.253 [2024-07-13 22:07:36.478244] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:17.253 22:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 1482113 00:24:17.253 [2024-07-13 22:07:36.478308] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:17.253 [2024-07-13 22:07:36.478324] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000045080 name raid_bdev1, state offline 00:24:17.510 [2024-07-13 22:07:36.881710] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:24:18.883 00:24:18.883 real 0m32.540s 00:24:18.883 user 0m45.017s 00:24:18.883 sys 0m5.598s 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.883 ************************************ 00:24:18.883 END TEST raid_rebuild_test_sb 00:24:18.883 ************************************ 00:24:18.883 22:07:38 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:18.883 22:07:38 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:24:18.883 22:07:38 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:24:18.883 22:07:38 bdev_raid -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:24:18.883 22:07:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:18.883 ************************************ 00:24:18.883 START TEST raid_rebuild_test_io 00:24:18.883 ************************************ 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 false true true 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=1487879 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 1487879 /var/tmp/spdk-raid.sock 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@829 -- # '[' -z 
1487879 ']' 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:18.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:18.883 22:07:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:19.141 [2024-07-13 22:07:38.294323] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:19.141 [2024-07-13 22:07:38.294441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1487879 ] 00:24:19.141 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:19.141 Zero copy mechanism will not be used. 
00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:19.141 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:19.141 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:19.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:19.141 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:19.141 [2024-07-13 22:07:38.458792] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.399 [2024-07-13 22:07:38.661224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.657 [2024-07-13 22:07:38.905437] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:19.657 [2024-07-13 22:07:38.905464] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:19.657 22:07:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:19.915 22:07:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # return 0 00:24:19.915 22:07:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:19.915 22:07:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:19.915 BaseBdev1_malloc 00:24:19.915 22:07:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:20.172 [2024-07-13 22:07:39.415503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:20.172 [2024-07-13 22:07:39.415556] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:24:20.172 [2024-07-13 22:07:39.415596] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:24:20.172 [2024-07-13 22:07:39.415611] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:20.172 [2024-07-13 22:07:39.417628] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:20.172 [2024-07-13 22:07:39.417659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:20.172 BaseBdev1 00:24:20.172 22:07:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:20.173 22:07:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:20.430 BaseBdev2_malloc 00:24:20.430 22:07:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:20.430 [2024-07-13 22:07:39.810770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:20.430 [2024-07-13 22:07:39.810824] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:20.430 [2024-07-13 22:07:39.810847] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:24:20.430 [2024-07-13 22:07:39.810863] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:20.430 [2024-07-13 22:07:39.812941] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:20.430 [2024-07-13 22:07:39.812972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:20.430 BaseBdev2 00:24:20.689 22:07:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in 
"${base_bdevs[@]}" 00:24:20.689 22:07:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:20.689 BaseBdev3_malloc 00:24:20.689 22:07:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:20.947 [2024-07-13 22:07:40.191041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:20.947 [2024-07-13 22:07:40.191131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:20.947 [2024-07-13 22:07:40.191158] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:24:20.947 [2024-07-13 22:07:40.191172] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:20.947 [2024-07-13 22:07:40.193270] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:20.947 [2024-07-13 22:07:40.193301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:20.947 BaseBdev3 00:24:20.947 22:07:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:20.947 22:07:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:21.204 BaseBdev4_malloc 00:24:21.204 22:07:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:21.204 [2024-07-13 22:07:40.554146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:21.204 [2024-07-13 
22:07:40.554221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:21.204 [2024-07-13 22:07:40.554245] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80 00:24:21.204 [2024-07-13 22:07:40.554258] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:21.204 [2024-07-13 22:07:40.556379] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:21.204 [2024-07-13 22:07:40.556410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:21.204 BaseBdev4 00:24:21.204 22:07:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:21.462 spare_malloc 00:24:21.462 22:07:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:21.720 spare_delay 00:24:21.720 22:07:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:21.720 [2024-07-13 22:07:41.104115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:21.720 [2024-07-13 22:07:41.104170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:21.720 [2024-07-13 22:07:41.104209] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:24:21.720 [2024-07-13 22:07:41.104224] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:21.720 [2024-07-13 22:07:41.106361] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:21.720 [2024-07-13 22:07:41.106394] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:21.720 spare 00:24:21.978 22:07:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:24:21.978 [2024-07-13 22:07:41.260550] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:21.978 [2024-07-13 22:07:41.262220] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:21.978 [2024-07-13 22:07:41.262275] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:21.978 [2024-07-13 22:07:41.262321] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:21.978 [2024-07-13 22:07:41.262401] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:24:21.978 [2024-07-13 22:07:41.262413] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:24:21.978 [2024-07-13 22:07:41.262681] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:24:21.978 [2024-07-13 22:07:41.262870] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:24:21.978 [2024-07-13 22:07:41.262883] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:24:21.978 [2024-07-13 22:07:41.263075] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:21.978 22:07:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:21.978 22:07:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:21.978 22:07:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:24:21.978 22:07:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:21.978 22:07:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:21.978 22:07:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:21.978 22:07:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:21.978 22:07:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:21.978 22:07:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:21.978 22:07:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:21.978 22:07:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.978 22:07:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.236 22:07:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:22.236 "name": "raid_bdev1", 00:24:22.236 "uuid": "dc28459c-5da3-4f74-ad5d-421ca1211206", 00:24:22.236 "strip_size_kb": 0, 00:24:22.236 "state": "online", 00:24:22.236 "raid_level": "raid1", 00:24:22.236 "superblock": false, 00:24:22.236 "num_base_bdevs": 4, 00:24:22.236 "num_base_bdevs_discovered": 4, 00:24:22.236 "num_base_bdevs_operational": 4, 00:24:22.236 "base_bdevs_list": [ 00:24:22.236 { 00:24:22.236 "name": "BaseBdev1", 00:24:22.236 "uuid": "eb267ca2-5749-59c1-8672-54e4848439fb", 00:24:22.236 "is_configured": true, 00:24:22.236 "data_offset": 0, 00:24:22.236 "data_size": 65536 00:24:22.236 }, 00:24:22.236 { 00:24:22.236 "name": "BaseBdev2", 00:24:22.236 "uuid": "b03895dc-9b12-5edd-ac33-443f8b5fe436", 00:24:22.236 "is_configured": true, 00:24:22.236 "data_offset": 0, 00:24:22.236 "data_size": 65536 
00:24:22.236 }, 00:24:22.236 { 00:24:22.236 "name": "BaseBdev3", 00:24:22.236 "uuid": "ddc2971e-b8df-5972-8349-7ffe52dca2f8", 00:24:22.236 "is_configured": true, 00:24:22.236 "data_offset": 0, 00:24:22.236 "data_size": 65536 00:24:22.236 }, 00:24:22.236 { 00:24:22.236 "name": "BaseBdev4", 00:24:22.236 "uuid": "371075cd-47c6-53d8-8aaf-07e4fc6812e9", 00:24:22.236 "is_configured": true, 00:24:22.236 "data_offset": 0, 00:24:22.236 "data_size": 65536 00:24:22.236 } 00:24:22.236 ] 00:24:22.236 }' 00:24:22.236 22:07:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:22.236 22:07:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:22.801 22:07:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:22.801 22:07:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:24:22.801 [2024-07-13 22:07:42.062976] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:22.801 22:07:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:24:22.801 22:07:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.801 22:07:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:23.059 22:07:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:24:23.059 22:07:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:24:23.059 22:07:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:23.059 22:07:42 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:23.059 [2024-07-13 22:07:42.332629] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010a50 00:24:23.059 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:23.059 Zero copy mechanism will not be used. 00:24:23.059 Running I/O for 60 seconds... 00:24:23.059 [2024-07-13 22:07:42.398648] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:23.059 [2024-07-13 22:07:42.408920] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000010a50 00:24:23.059 22:07:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:23.059 22:07:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:23.059 22:07:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:23.059 22:07:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:23.059 22:07:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:23.059 22:07:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:23.059 22:07:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:23.059 22:07:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:23.059 22:07:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:23.059 22:07:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:23.059 22:07:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:24:23.059 22:07:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.318 22:07:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:23.318 "name": "raid_bdev1", 00:24:23.318 "uuid": "dc28459c-5da3-4f74-ad5d-421ca1211206", 00:24:23.318 "strip_size_kb": 0, 00:24:23.318 "state": "online", 00:24:23.318 "raid_level": "raid1", 00:24:23.318 "superblock": false, 00:24:23.318 "num_base_bdevs": 4, 00:24:23.318 "num_base_bdevs_discovered": 3, 00:24:23.318 "num_base_bdevs_operational": 3, 00:24:23.318 "base_bdevs_list": [ 00:24:23.318 { 00:24:23.318 "name": null, 00:24:23.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.318 "is_configured": false, 00:24:23.318 "data_offset": 0, 00:24:23.318 "data_size": 65536 00:24:23.318 }, 00:24:23.318 { 00:24:23.318 "name": "BaseBdev2", 00:24:23.318 "uuid": "b03895dc-9b12-5edd-ac33-443f8b5fe436", 00:24:23.318 "is_configured": true, 00:24:23.318 "data_offset": 0, 00:24:23.318 "data_size": 65536 00:24:23.318 }, 00:24:23.318 { 00:24:23.318 "name": "BaseBdev3", 00:24:23.318 "uuid": "ddc2971e-b8df-5972-8349-7ffe52dca2f8", 00:24:23.318 "is_configured": true, 00:24:23.318 "data_offset": 0, 00:24:23.318 "data_size": 65536 00:24:23.318 }, 00:24:23.318 { 00:24:23.318 "name": "BaseBdev4", 00:24:23.318 "uuid": "371075cd-47c6-53d8-8aaf-07e4fc6812e9", 00:24:23.318 "is_configured": true, 00:24:23.318 "data_offset": 0, 00:24:23.318 "data_size": 65536 00:24:23.318 } 00:24:23.318 ] 00:24:23.318 }' 00:24:23.318 22:07:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:23.318 22:07:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:23.884 22:07:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:23.884 [2024-07-13 
22:07:43.244122] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:24.142 [2024-07-13 22:07:43.291757] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:24:24.142 [2024-07-13 22:07:43.293605] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:24.142 22:07:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:24:24.142 [2024-07-13 22:07:43.426291] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:24.399 [2024-07-13 22:07:43.645326] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:24.399 [2024-07-13 22:07:43.645611] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:24.965 22:07:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:24.965 22:07:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:24.965 22:07:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:24.965 22:07:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:24.965 22:07:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:24.965 22:07:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:24.965 22:07:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.223 [2024-07-13 22:07:44.378735] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:25.223 22:07:44 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:25.223 "name": "raid_bdev1", 00:24:25.223 "uuid": "dc28459c-5da3-4f74-ad5d-421ca1211206", 00:24:25.223 "strip_size_kb": 0, 00:24:25.223 "state": "online", 00:24:25.223 "raid_level": "raid1", 00:24:25.223 "superblock": false, 00:24:25.223 "num_base_bdevs": 4, 00:24:25.223 "num_base_bdevs_discovered": 4, 00:24:25.223 "num_base_bdevs_operational": 4, 00:24:25.223 "process": { 00:24:25.223 "type": "rebuild", 00:24:25.223 "target": "spare", 00:24:25.223 "progress": { 00:24:25.223 "blocks": 16384, 00:24:25.223 "percent": 25 00:24:25.223 } 00:24:25.223 }, 00:24:25.223 "base_bdevs_list": [ 00:24:25.223 { 00:24:25.223 "name": "spare", 00:24:25.223 "uuid": "07d01f86-a1a8-5c13-bf3c-1a7675a138f6", 00:24:25.223 "is_configured": true, 00:24:25.223 "data_offset": 0, 00:24:25.223 "data_size": 65536 00:24:25.223 }, 00:24:25.223 { 00:24:25.223 "name": "BaseBdev2", 00:24:25.223 "uuid": "b03895dc-9b12-5edd-ac33-443f8b5fe436", 00:24:25.223 "is_configured": true, 00:24:25.223 "data_offset": 0, 00:24:25.223 "data_size": 65536 00:24:25.223 }, 00:24:25.223 { 00:24:25.223 "name": "BaseBdev3", 00:24:25.223 "uuid": "ddc2971e-b8df-5972-8349-7ffe52dca2f8", 00:24:25.223 "is_configured": true, 00:24:25.223 "data_offset": 0, 00:24:25.223 "data_size": 65536 00:24:25.223 }, 00:24:25.223 { 00:24:25.223 "name": "BaseBdev4", 00:24:25.223 "uuid": "371075cd-47c6-53d8-8aaf-07e4fc6812e9", 00:24:25.223 "is_configured": true, 00:24:25.223 "data_offset": 0, 00:24:25.223 "data_size": 65536 00:24:25.223 } 00:24:25.223 ] 00:24:25.223 }' 00:24:25.223 22:07:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:25.223 22:07:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:25.223 22:07:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:25.223 22:07:44 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:25.223 22:07:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:25.482 [2024-07-13 22:07:44.715772] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:24:25.482 [2024-07-13 22:07:44.724797] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:25.482 [2024-07-13 22:07:44.845952] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:25.482 [2024-07-13 22:07:44.846555] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:25.740 [2024-07-13 22:07:44.954006] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:25.740 [2024-07-13 22:07:44.964860] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:25.740 [2024-07-13 22:07:44.964893] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:25.740 [2024-07-13 22:07:44.964917] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:25.740 [2024-07-13 22:07:44.991837] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000010a50 00:24:25.740 22:07:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:25.740 22:07:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:25.740 22:07:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:25.740 22:07:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:25.740 
22:07:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:25.740 22:07:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:25.740 22:07:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:25.740 22:07:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:25.740 22:07:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:25.740 22:07:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:25.740 22:07:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.740 22:07:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.998 22:07:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:25.998 "name": "raid_bdev1", 00:24:25.998 "uuid": "dc28459c-5da3-4f74-ad5d-421ca1211206", 00:24:25.998 "strip_size_kb": 0, 00:24:25.998 "state": "online", 00:24:25.998 "raid_level": "raid1", 00:24:25.998 "superblock": false, 00:24:25.998 "num_base_bdevs": 4, 00:24:25.998 "num_base_bdevs_discovered": 3, 00:24:25.998 "num_base_bdevs_operational": 3, 00:24:25.998 "base_bdevs_list": [ 00:24:25.998 { 00:24:25.998 "name": null, 00:24:25.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.998 "is_configured": false, 00:24:25.998 "data_offset": 0, 00:24:25.998 "data_size": 65536 00:24:25.998 }, 00:24:25.998 { 00:24:25.998 "name": "BaseBdev2", 00:24:25.998 "uuid": "b03895dc-9b12-5edd-ac33-443f8b5fe436", 00:24:25.998 "is_configured": true, 00:24:25.998 "data_offset": 0, 00:24:25.998 "data_size": 65536 00:24:25.998 }, 00:24:25.998 { 00:24:25.998 "name": "BaseBdev3", 00:24:25.998 "uuid": "ddc2971e-b8df-5972-8349-7ffe52dca2f8", 
00:24:25.998 "is_configured": true, 00:24:25.998 "data_offset": 0, 00:24:25.998 "data_size": 65536 00:24:25.998 }, 00:24:25.998 { 00:24:25.998 "name": "BaseBdev4", 00:24:25.998 "uuid": "371075cd-47c6-53d8-8aaf-07e4fc6812e9", 00:24:25.998 "is_configured": true, 00:24:25.998 "data_offset": 0, 00:24:25.998 "data_size": 65536 00:24:25.998 } 00:24:25.998 ] 00:24:25.998 }' 00:24:25.998 22:07:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:25.998 22:07:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:26.564 22:07:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:26.564 22:07:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:26.564 22:07:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:26.564 22:07:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:26.564 22:07:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:26.564 22:07:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.564 22:07:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:26.564 22:07:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:26.564 "name": "raid_bdev1", 00:24:26.564 "uuid": "dc28459c-5da3-4f74-ad5d-421ca1211206", 00:24:26.564 "strip_size_kb": 0, 00:24:26.564 "state": "online", 00:24:26.564 "raid_level": "raid1", 00:24:26.564 "superblock": false, 00:24:26.564 "num_base_bdevs": 4, 00:24:26.564 "num_base_bdevs_discovered": 3, 00:24:26.564 "num_base_bdevs_operational": 3, 00:24:26.564 "base_bdevs_list": [ 00:24:26.564 { 00:24:26.564 "name": null, 00:24:26.564 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:24:26.564 "is_configured": false, 00:24:26.564 "data_offset": 0, 00:24:26.564 "data_size": 65536 00:24:26.564 }, 00:24:26.564 { 00:24:26.564 "name": "BaseBdev2", 00:24:26.564 "uuid": "b03895dc-9b12-5edd-ac33-443f8b5fe436", 00:24:26.564 "is_configured": true, 00:24:26.564 "data_offset": 0, 00:24:26.564 "data_size": 65536 00:24:26.564 }, 00:24:26.564 { 00:24:26.564 "name": "BaseBdev3", 00:24:26.564 "uuid": "ddc2971e-b8df-5972-8349-7ffe52dca2f8", 00:24:26.564 "is_configured": true, 00:24:26.564 "data_offset": 0, 00:24:26.564 "data_size": 65536 00:24:26.564 }, 00:24:26.564 { 00:24:26.564 "name": "BaseBdev4", 00:24:26.564 "uuid": "371075cd-47c6-53d8-8aaf-07e4fc6812e9", 00:24:26.564 "is_configured": true, 00:24:26.564 "data_offset": 0, 00:24:26.564 "data_size": 65536 00:24:26.564 } 00:24:26.564 ] 00:24:26.564 }' 00:24:26.564 22:07:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:26.564 22:07:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:26.564 22:07:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:26.822 22:07:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:26.822 22:07:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:26.822 [2024-07-13 22:07:46.146256] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:26.822 22:07:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:26.822 [2024-07-13 22:07:46.210731] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010bf0 00:24:27.080 [2024-07-13 22:07:46.212575] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 
00:24:27.080 [2024-07-13 22:07:46.335439] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:27.080 [2024-07-13 22:07:46.335944] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:27.338 [2024-07-13 22:07:46.539081] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:27.338 [2024-07-13 22:07:46.539665] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:27.597 [2024-07-13 22:07:46.884950] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:27.855 [2024-07-13 22:07:47.101253] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:27.855 [2024-07-13 22:07:47.101843] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:27.855 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:27.855 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:27.855 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:27.855 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:27.855 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:27.855 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.855 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:24:28.114 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:28.114 "name": "raid_bdev1", 00:24:28.114 "uuid": "dc28459c-5da3-4f74-ad5d-421ca1211206", 00:24:28.114 "strip_size_kb": 0, 00:24:28.114 "state": "online", 00:24:28.114 "raid_level": "raid1", 00:24:28.114 "superblock": false, 00:24:28.114 "num_base_bdevs": 4, 00:24:28.114 "num_base_bdevs_discovered": 4, 00:24:28.114 "num_base_bdevs_operational": 4, 00:24:28.114 "process": { 00:24:28.114 "type": "rebuild", 00:24:28.114 "target": "spare", 00:24:28.114 "progress": { 00:24:28.114 "blocks": 12288, 00:24:28.114 "percent": 18 00:24:28.114 } 00:24:28.114 }, 00:24:28.114 "base_bdevs_list": [ 00:24:28.114 { 00:24:28.114 "name": "spare", 00:24:28.114 "uuid": "07d01f86-a1a8-5c13-bf3c-1a7675a138f6", 00:24:28.114 "is_configured": true, 00:24:28.114 "data_offset": 0, 00:24:28.114 "data_size": 65536 00:24:28.114 }, 00:24:28.114 { 00:24:28.114 "name": "BaseBdev2", 00:24:28.114 "uuid": "b03895dc-9b12-5edd-ac33-443f8b5fe436", 00:24:28.114 "is_configured": true, 00:24:28.114 "data_offset": 0, 00:24:28.114 "data_size": 65536 00:24:28.114 }, 00:24:28.114 { 00:24:28.114 "name": "BaseBdev3", 00:24:28.114 "uuid": "ddc2971e-b8df-5972-8349-7ffe52dca2f8", 00:24:28.114 "is_configured": true, 00:24:28.114 "data_offset": 0, 00:24:28.114 "data_size": 65536 00:24:28.114 }, 00:24:28.114 { 00:24:28.114 "name": "BaseBdev4", 00:24:28.114 "uuid": "371075cd-47c6-53d8-8aaf-07e4fc6812e9", 00:24:28.114 "is_configured": true, 00:24:28.114 "data_offset": 0, 00:24:28.114 "data_size": 65536 00:24:28.114 } 00:24:28.114 ] 00:24:28.114 }' 00:24:28.114 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:28.114 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:28.114 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:28.114 
[2024-07-13 22:07:47.442301] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:28.114 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:28.114 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:24:28.114 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:24:28.114 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:24:28.114 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:24:28.114 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:28.373 [2024-07-13 22:07:47.628440] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:28.373 [2024-07-13 22:07:47.647106] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:28.373 [2024-07-13 22:07:47.754748] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000010a50 00:24:28.373 [2024-07-13 22:07:47.754774] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000010bf0 00:24:28.631 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:24:28.631 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:24:28.631 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:28.631 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:28.631 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local 
process_type=rebuild 00:24:28.631 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:28.631 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:28.631 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.631 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:28.631 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:28.631 "name": "raid_bdev1", 00:24:28.631 "uuid": "dc28459c-5da3-4f74-ad5d-421ca1211206", 00:24:28.631 "strip_size_kb": 0, 00:24:28.631 "state": "online", 00:24:28.631 "raid_level": "raid1", 00:24:28.631 "superblock": false, 00:24:28.631 "num_base_bdevs": 4, 00:24:28.631 "num_base_bdevs_discovered": 3, 00:24:28.631 "num_base_bdevs_operational": 3, 00:24:28.631 "process": { 00:24:28.631 "type": "rebuild", 00:24:28.631 "target": "spare", 00:24:28.631 "progress": { 00:24:28.631 "blocks": 18432, 00:24:28.631 "percent": 28 00:24:28.631 } 00:24:28.631 }, 00:24:28.631 "base_bdevs_list": [ 00:24:28.631 { 00:24:28.631 "name": "spare", 00:24:28.631 "uuid": "07d01f86-a1a8-5c13-bf3c-1a7675a138f6", 00:24:28.631 "is_configured": true, 00:24:28.631 "data_offset": 0, 00:24:28.631 "data_size": 65536 00:24:28.631 }, 00:24:28.631 { 00:24:28.631 "name": null, 00:24:28.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.631 "is_configured": false, 00:24:28.631 "data_offset": 0, 00:24:28.631 "data_size": 65536 00:24:28.631 }, 00:24:28.631 { 00:24:28.631 "name": "BaseBdev3", 00:24:28.631 "uuid": "ddc2971e-b8df-5972-8349-7ffe52dca2f8", 00:24:28.631 "is_configured": true, 00:24:28.631 "data_offset": 0, 00:24:28.631 "data_size": 65536 00:24:28.631 }, 00:24:28.631 { 00:24:28.631 "name": "BaseBdev4", 00:24:28.631 "uuid": 
"371075cd-47c6-53d8-8aaf-07e4fc6812e9", 00:24:28.631 "is_configured": true, 00:24:28.631 "data_offset": 0, 00:24:28.631 "data_size": 65536 00:24:28.631 } 00:24:28.631 ] 00:24:28.631 }' 00:24:28.631 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:28.631 [2024-07-13 22:07:47.980236] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:24:28.631 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:28.631 22:07:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:28.889 22:07:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:28.889 22:07:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=799 00:24:28.890 22:07:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:28.890 22:07:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:28.890 22:07:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:28.890 22:07:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:28.890 22:07:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:28.890 22:07:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:28.890 22:07:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.890 22:07:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:28.890 [2024-07-13 22:07:48.100882] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:28.890 22:07:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:28.890 "name": "raid_bdev1", 00:24:28.890 "uuid": "dc28459c-5da3-4f74-ad5d-421ca1211206", 00:24:28.890 "strip_size_kb": 0, 00:24:28.890 "state": "online", 00:24:28.890 "raid_level": "raid1", 00:24:28.890 "superblock": false, 00:24:28.890 "num_base_bdevs": 4, 00:24:28.890 "num_base_bdevs_discovered": 3, 00:24:28.890 "num_base_bdevs_operational": 3, 00:24:28.890 "process": { 00:24:28.890 "type": "rebuild", 00:24:28.890 "target": "spare", 00:24:28.890 "progress": { 00:24:28.890 "blocks": 22528, 00:24:28.890 "percent": 34 00:24:28.890 } 00:24:28.890 }, 00:24:28.890 "base_bdevs_list": [ 00:24:28.890 { 00:24:28.890 "name": "spare", 00:24:28.890 "uuid": "07d01f86-a1a8-5c13-bf3c-1a7675a138f6", 00:24:28.890 "is_configured": true, 00:24:28.890 "data_offset": 0, 00:24:28.890 "data_size": 65536 00:24:28.890 }, 00:24:28.890 { 00:24:28.890 "name": null, 00:24:28.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.890 "is_configured": false, 00:24:28.890 "data_offset": 0, 00:24:28.890 "data_size": 65536 00:24:28.890 }, 00:24:28.890 { 00:24:28.890 "name": "BaseBdev3", 00:24:28.890 "uuid": "ddc2971e-b8df-5972-8349-7ffe52dca2f8", 00:24:28.890 "is_configured": true, 00:24:28.890 "data_offset": 0, 00:24:28.890 "data_size": 65536 00:24:28.890 }, 00:24:28.890 { 00:24:28.890 "name": "BaseBdev4", 00:24:28.890 "uuid": "371075cd-47c6-53d8-8aaf-07e4fc6812e9", 00:24:28.890 "is_configured": true, 00:24:28.890 "data_offset": 0, 00:24:28.890 "data_size": 65536 00:24:28.890 } 00:24:28.890 ] 00:24:28.890 }' 00:24:28.890 22:07:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:28.890 22:07:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:28.890 22:07:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq 
-r '.process.target // "none"' 00:24:29.178 22:07:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:29.178 22:07:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:29.448 [2024-07-13 22:07:48.573046] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:24:29.448 [2024-07-13 22:07:48.789213] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:24:29.448 [2024-07-13 22:07:48.789993] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:24:29.706 [2024-07-13 22:07:49.016685] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:24:29.963 22:07:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:29.963 22:07:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:29.963 22:07:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:29.963 22:07:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:29.963 22:07:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:29.963 22:07:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:29.963 22:07:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:29.963 22:07:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.221 [2024-07-13 22:07:49.367003] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:24:30.221 22:07:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:30.221 "name": "raid_bdev1", 00:24:30.221 "uuid": "dc28459c-5da3-4f74-ad5d-421ca1211206", 00:24:30.221 "strip_size_kb": 0, 00:24:30.221 "state": "online", 00:24:30.221 "raid_level": "raid1", 00:24:30.221 "superblock": false, 00:24:30.221 "num_base_bdevs": 4, 00:24:30.221 "num_base_bdevs_discovered": 3, 00:24:30.221 "num_base_bdevs_operational": 3, 00:24:30.221 "process": { 00:24:30.221 "type": "rebuild", 00:24:30.221 "target": "spare", 00:24:30.221 "progress": { 00:24:30.221 "blocks": 38912, 00:24:30.221 "percent": 59 00:24:30.221 } 00:24:30.221 }, 00:24:30.221 "base_bdevs_list": [ 00:24:30.221 { 00:24:30.221 "name": "spare", 00:24:30.221 "uuid": "07d01f86-a1a8-5c13-bf3c-1a7675a138f6", 00:24:30.221 "is_configured": true, 00:24:30.221 "data_offset": 0, 00:24:30.221 "data_size": 65536 00:24:30.221 }, 00:24:30.221 { 00:24:30.221 "name": null, 00:24:30.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.221 "is_configured": false, 00:24:30.221 "data_offset": 0, 00:24:30.221 "data_size": 65536 00:24:30.221 }, 00:24:30.221 { 00:24:30.221 "name": "BaseBdev3", 00:24:30.221 "uuid": "ddc2971e-b8df-5972-8349-7ffe52dca2f8", 00:24:30.221 "is_configured": true, 00:24:30.221 "data_offset": 0, 00:24:30.221 "data_size": 65536 00:24:30.221 }, 00:24:30.221 { 00:24:30.221 "name": "BaseBdev4", 00:24:30.221 "uuid": "371075cd-47c6-53d8-8aaf-07e4fc6812e9", 00:24:30.221 "is_configured": true, 00:24:30.221 "data_offset": 0, 00:24:30.221 "data_size": 65536 00:24:30.221 } 00:24:30.221 ] 00:24:30.221 }' 00:24:30.221 22:07:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:30.221 22:07:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:30.221 22:07:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.target // "none"' 00:24:30.221 22:07:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:30.221 22:07:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:30.479 [2024-07-13 22:07:49.745137] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:24:30.479 [2024-07-13 22:07:49.860233] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:24:30.738 [2024-07-13 22:07:50.088122] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:24:30.996 [2024-07-13 22:07:50.310992] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:24:31.254 22:07:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:31.254 22:07:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:31.254 22:07:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:31.254 22:07:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:31.254 22:07:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:31.254 22:07:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:31.254 22:07:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.254 22:07:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:31.254 [2024-07-13 22:07:50.643832] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
57344 offset_begin: 55296 offset_end: 61440 00:24:31.513 22:07:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:31.513 "name": "raid_bdev1", 00:24:31.513 "uuid": "dc28459c-5da3-4f74-ad5d-421ca1211206", 00:24:31.513 "strip_size_kb": 0, 00:24:31.513 "state": "online", 00:24:31.513 "raid_level": "raid1", 00:24:31.513 "superblock": false, 00:24:31.513 "num_base_bdevs": 4, 00:24:31.513 "num_base_bdevs_discovered": 3, 00:24:31.513 "num_base_bdevs_operational": 3, 00:24:31.513 "process": { 00:24:31.513 "type": "rebuild", 00:24:31.513 "target": "spare", 00:24:31.513 "progress": { 00:24:31.513 "blocks": 57344, 00:24:31.513 "percent": 87 00:24:31.513 } 00:24:31.513 }, 00:24:31.513 "base_bdevs_list": [ 00:24:31.513 { 00:24:31.513 "name": "spare", 00:24:31.513 "uuid": "07d01f86-a1a8-5c13-bf3c-1a7675a138f6", 00:24:31.513 "is_configured": true, 00:24:31.513 "data_offset": 0, 00:24:31.513 "data_size": 65536 00:24:31.513 }, 00:24:31.513 { 00:24:31.513 "name": null, 00:24:31.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.513 "is_configured": false, 00:24:31.513 "data_offset": 0, 00:24:31.513 "data_size": 65536 00:24:31.513 }, 00:24:31.513 { 00:24:31.513 "name": "BaseBdev3", 00:24:31.513 "uuid": "ddc2971e-b8df-5972-8349-7ffe52dca2f8", 00:24:31.513 "is_configured": true, 00:24:31.513 "data_offset": 0, 00:24:31.513 "data_size": 65536 00:24:31.513 }, 00:24:31.513 { 00:24:31.513 "name": "BaseBdev4", 00:24:31.513 "uuid": "371075cd-47c6-53d8-8aaf-07e4fc6812e9", 00:24:31.513 "is_configured": true, 00:24:31.513 "data_offset": 0, 00:24:31.513 "data_size": 65536 00:24:31.513 } 00:24:31.513 ] 00:24:31.513 }' 00:24:31.513 22:07:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:31.513 22:07:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:31.513 22:07:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:24:31.513 22:07:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:31.513 22:07:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:31.771 [2024-07-13 22:07:51.082120] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:32.027 [2024-07-13 22:07:51.182354] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:32.027 [2024-07-13 22:07:51.183640] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:32.592 22:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:32.592 22:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:32.592 22:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:32.593 22:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:32.593 22:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:32.593 22:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:32.593 22:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.593 22:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.850 22:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:32.850 "name": "raid_bdev1", 00:24:32.850 "uuid": "dc28459c-5da3-4f74-ad5d-421ca1211206", 00:24:32.850 "strip_size_kb": 0, 00:24:32.850 "state": "online", 00:24:32.850 "raid_level": "raid1", 00:24:32.850 "superblock": false, 00:24:32.850 "num_base_bdevs": 4, 00:24:32.850 "num_base_bdevs_discovered": 3, 
00:24:32.850 "num_base_bdevs_operational": 3, 00:24:32.850 "base_bdevs_list": [ 00:24:32.850 { 00:24:32.850 "name": "spare", 00:24:32.850 "uuid": "07d01f86-a1a8-5c13-bf3c-1a7675a138f6", 00:24:32.850 "is_configured": true, 00:24:32.850 "data_offset": 0, 00:24:32.850 "data_size": 65536 00:24:32.850 }, 00:24:32.850 { 00:24:32.850 "name": null, 00:24:32.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.850 "is_configured": false, 00:24:32.850 "data_offset": 0, 00:24:32.850 "data_size": 65536 00:24:32.850 }, 00:24:32.850 { 00:24:32.850 "name": "BaseBdev3", 00:24:32.850 "uuid": "ddc2971e-b8df-5972-8349-7ffe52dca2f8", 00:24:32.850 "is_configured": true, 00:24:32.850 "data_offset": 0, 00:24:32.850 "data_size": 65536 00:24:32.850 }, 00:24:32.850 { 00:24:32.850 "name": "BaseBdev4", 00:24:32.850 "uuid": "371075cd-47c6-53d8-8aaf-07e4fc6812e9", 00:24:32.850 "is_configured": true, 00:24:32.850 "data_offset": 0, 00:24:32.850 "data_size": 65536 00:24:32.850 } 00:24:32.850 ] 00:24:32.850 }' 00:24:32.850 22:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:32.850 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:32.850 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:32.850 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:24:32.850 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:24:32.850 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:32.850 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:32.850 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:32.850 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 
00:24:32.850 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:32.850 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.850 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:33.108 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:33.108 "name": "raid_bdev1", 00:24:33.108 "uuid": "dc28459c-5da3-4f74-ad5d-421ca1211206", 00:24:33.108 "strip_size_kb": 0, 00:24:33.108 "state": "online", 00:24:33.108 "raid_level": "raid1", 00:24:33.108 "superblock": false, 00:24:33.108 "num_base_bdevs": 4, 00:24:33.108 "num_base_bdevs_discovered": 3, 00:24:33.108 "num_base_bdevs_operational": 3, 00:24:33.108 "base_bdevs_list": [ 00:24:33.108 { 00:24:33.108 "name": "spare", 00:24:33.108 "uuid": "07d01f86-a1a8-5c13-bf3c-1a7675a138f6", 00:24:33.108 "is_configured": true, 00:24:33.108 "data_offset": 0, 00:24:33.108 "data_size": 65536 00:24:33.108 }, 00:24:33.108 { 00:24:33.108 "name": null, 00:24:33.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.108 "is_configured": false, 00:24:33.108 "data_offset": 0, 00:24:33.108 "data_size": 65536 00:24:33.108 }, 00:24:33.108 { 00:24:33.108 "name": "BaseBdev3", 00:24:33.108 "uuid": "ddc2971e-b8df-5972-8349-7ffe52dca2f8", 00:24:33.108 "is_configured": true, 00:24:33.108 "data_offset": 0, 00:24:33.108 "data_size": 65536 00:24:33.108 }, 00:24:33.108 { 00:24:33.108 "name": "BaseBdev4", 00:24:33.108 "uuid": "371075cd-47c6-53d8-8aaf-07e4fc6812e9", 00:24:33.108 "is_configured": true, 00:24:33.108 "data_offset": 0, 00:24:33.108 "data_size": 65536 00:24:33.108 } 00:24:33.108 ] 00:24:33.108 }' 00:24:33.108 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:33.108 22:07:52 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:33.108 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:33.108 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:33.108 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:33.108 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:33.108 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:33.108 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:33.108 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:33.108 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:33.108 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:33.108 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:33.108 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:33.108 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:33.108 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.108 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:33.365 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:33.365 "name": "raid_bdev1", 00:24:33.365 "uuid": "dc28459c-5da3-4f74-ad5d-421ca1211206", 00:24:33.365 "strip_size_kb": 0, 00:24:33.365 "state": "online", 
00:24:33.365 "raid_level": "raid1", 00:24:33.365 "superblock": false, 00:24:33.365 "num_base_bdevs": 4, 00:24:33.365 "num_base_bdevs_discovered": 3, 00:24:33.365 "num_base_bdevs_operational": 3, 00:24:33.365 "base_bdevs_list": [ 00:24:33.365 { 00:24:33.365 "name": "spare", 00:24:33.365 "uuid": "07d01f86-a1a8-5c13-bf3c-1a7675a138f6", 00:24:33.365 "is_configured": true, 00:24:33.365 "data_offset": 0, 00:24:33.366 "data_size": 65536 00:24:33.366 }, 00:24:33.366 { 00:24:33.366 "name": null, 00:24:33.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.366 "is_configured": false, 00:24:33.366 "data_offset": 0, 00:24:33.366 "data_size": 65536 00:24:33.366 }, 00:24:33.366 { 00:24:33.366 "name": "BaseBdev3", 00:24:33.366 "uuid": "ddc2971e-b8df-5972-8349-7ffe52dca2f8", 00:24:33.366 "is_configured": true, 00:24:33.366 "data_offset": 0, 00:24:33.366 "data_size": 65536 00:24:33.366 }, 00:24:33.366 { 00:24:33.366 "name": "BaseBdev4", 00:24:33.366 "uuid": "371075cd-47c6-53d8-8aaf-07e4fc6812e9", 00:24:33.366 "is_configured": true, 00:24:33.366 "data_offset": 0, 00:24:33.366 "data_size": 65536 00:24:33.366 } 00:24:33.366 ] 00:24:33.366 }' 00:24:33.366 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:33.366 22:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:33.622 22:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:33.878 [2024-07-13 22:07:53.144179] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:33.878 [2024-07-13 22:07:53.144210] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:33.878 00:24:33.878 Latency(us) 00:24:33.878 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.878 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, 
depth: 2, IO size: 3145728) 00:24:33.878 raid_bdev1 : 10.88 107.69 323.06 0.00 0.00 13280.86 288.36 114923.93 00:24:33.878 =================================================================================================================== 00:24:33.878 Total : 107.69 323.06 0.00 0.00 13280.86 288.36 114923.93 00:24:33.878 [2024-07-13 22:07:53.264311] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:33.878 [2024-07-13 22:07:53.264347] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:33.878 [2024-07-13 22:07:53.264435] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:33.878 [2024-07-13 22:07:53.264447] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:24:33.878 0 00:24:34.134 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.134 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:24:34.134 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:24:34.134 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:24:34.134 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:34.134 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:24:34.134 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:34.134 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:24:34.134 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:34.134 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0') 00:24:34.134 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:34.134 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:34.134 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:34.134 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:34.134 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:24:34.391 /dev/nbd0 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:34.391 1+0 records in 00:24:34.391 1+0 records out 00:24:34.391 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000275417 s, 14.9 MB/s 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # continue 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:34.391 22:07:53 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:34.391 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:24:34.649 /dev/nbd1 00:24:34.649 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:34.649 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:34.649 22:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:24:34.649 22:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:24:34.649 22:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:34.649 22:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:34.649 22:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:24:34.649 22:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:24:34.649 22:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:34.649 22:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:34.649 22:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:34.649 1+0 records in 00:24:34.649 1+0 records out 00:24:34.649 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260377 s, 15.7 MB/s 00:24:34.649 22:07:53 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:34.649 22:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:24:34.649 22:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:34.649 22:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:34.649 22:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:24:34.649 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:34.649 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:34.649 22:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:34.649 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:34.649 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:34.649 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:34.649 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:34.649 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:24:34.649 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:34.649 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:34.906 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:34.906 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:34.906 
22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:34.906 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:34.906 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:34.906 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:34.906 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:24:34.906 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:34.906 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:24:34.907 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:24:34.907 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:24:34.907 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:34.907 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:24:34.907 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:34.907 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:34.907 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:34.907 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:34.907 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:34.907 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:34.907 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:24:35.164 /dev/nbd1 00:24:35.164 
22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:24:35.164 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:24:35.164 22:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:24:35.164 22:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i
00:24:35.164 22:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:24:35.164 22:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:24:35.164 22:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:24:35.164 22:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break
00:24:35.164 22:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:24:35.164 22:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:24:35.164 22:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:24:35.164 1+0 records in
00:24:35.164 1+0 records out
00:24:35.164 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253198 s, 16.2 MB/s
00:24:35.164 22:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:24:35.164 22:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096
00:24:35.164 22:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:24:35.164 22:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:24:35.164 22:07:54 bdev_raid.raid_rebuild_test_io --
common/autotest_common.sh@887 -- # return 0 00:24:35.164 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:35.164 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:35.164 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:35.164 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:35.164 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:35.164 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:35.164 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:35.164 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:24:35.164 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:35.164 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:35.422 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:35.422 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:35.422 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:35.422 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:35.422 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:35.422 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:35.422 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:24:35.422 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 
00:24:35.422 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:35.422 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:35.422 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:35.422 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:35.422 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:24:35.422 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:35.422 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:35.680 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:35.680 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:35.680 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:35.680 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:35.680 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:35.680 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:35.680 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:24:35.680 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:35.680 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:24:35.680 22:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 1487879 00:24:35.680 22:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@948 -- # '[' -z 1487879 ']' 00:24:35.680 22:07:54 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # kill -0 1487879
00:24:35.680 22:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # uname
00:24:35.680 22:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:35.680 22:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1487879
00:24:35.680 22:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:24:35.680 22:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:24:35.680 22:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1487879'
00:24:35.680 killing process with pid 1487879
00:24:35.680 22:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # kill 1487879
00:24:35.680 Received shutdown signal, test time was about 12.626213 seconds
00:24:35.680
00:24:35.680 Latency(us)
00:24:35.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:35.681 ===================================================================================================================
00:24:35.681 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:35.681 [2024-07-13 22:07:54.991493] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:24:35.681 22:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # wait 1487879
00:24:36.246 [2024-07-13 22:07:55.330154] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:24:37.620 22:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0
00:24:37.620
00:24:37.620 real 0m18.390s
00:24:37.620 user 0m26.046s
00:24:37.620 sys 0m2.939s
00:24:37.620 22:07:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # xtrace_disable
00:24:37.620 22:07:56 bdev_raid.raid_rebuild_test_io --
common/autotest_common.sh@10 -- # set +x 00:24:37.620 ************************************ 00:24:37.620 END TEST raid_rebuild_test_io 00:24:37.620 ************************************ 00:24:37.620 22:07:56 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:37.620 22:07:56 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:24:37.620 22:07:56 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:24:37.620 22:07:56 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:37.620 22:07:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:37.620 ************************************ 00:24:37.620 START TEST raid_rebuild_test_sb_io 00:24:37.620 ************************************ 00:24:37.620 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 true true true 00:24:37.620 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:24:37.620 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:24:37.620 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:24:37.620 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:24:37.620 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:24:37.620 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:24:37.620 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:37.620 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:24:37.620 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:37.620 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 
00:24:37.620 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:24:37.620 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:37.620 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:37.620 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:24:37.620 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:37.621 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:37.621 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:24:37.621 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:37.621 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:37.621 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:37.621 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:24:37.621 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:24:37.621 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:24:37.621 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:24:37.621 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:24:37.621 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:24:37.621 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:24:37.621 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:24:37.621 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:24:37.621 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:24:37.621 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=1491306 00:24:37.621 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 1491306 /var/tmp/spdk-raid.sock 00:24:37.621 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:37.621 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@829 -- # '[' -z 1491306 ']' 00:24:37.621 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:37.621 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:37.621 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:37.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:37.621 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:37.621 22:07:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:37.621 [2024-07-13 22:07:56.764963] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:37.621 [2024-07-13 22:07:56.765055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1491306 ]
00:24:37.621 I/O size of 3145728 is greater than zero copy threshold (65536).
00:24:37.621 Zero copy mechanism will not be used.
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3d:01.0 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3d:01.1 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3d:01.2 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3d:01.3 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3d:01.4 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3d:01.5 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3d:01.6 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3d:01.7 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3d:02.0 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3d:02.1 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3d:02.2 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3d:02.3 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3d:02.4 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3d:02.5 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3d:02.6 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3d:02.7 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3f:01.0 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3f:01.1 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3f:01.2 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3f:01.3 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3f:01.4 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3f:01.5 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3f:01.6 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3f:01.7 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3f:02.0 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3f:02.1 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3f:02.2 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3f:02.3 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3f:02.4 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3f:02.5 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3f:02.6 cannot be used
00:24:37.621 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:37.621 EAL: Requested device 0000:3f:02.7 cannot be used
[2024-07-13 22:07:56.923846] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:37.880 [2024-07-13 22:07:57.122663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:38.139 [2024-07-13 22:07:57.346377] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:24:38.139 [2024-07-13 22:07:57.346412] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:24:38.139 22:07:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:38.139 22:07:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # return 0
00:24:38.139 22:07:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}"
00:24:38.139 22:07:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:24:38.396 BaseBdev1_malloc
00:24:38.396 22:07:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:38.654 [2024-07-13 22:07:57.892911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:38.654 [2024-07-13 22:07:57.892975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.654 [2024-07-13 22:07:57.893000] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:24:38.654 [2024-07-13 22:07:57.893014] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:38.654 [2024-07-13 22:07:57.895154] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:38.654 [2024-07-13 22:07:57.895186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:38.654 BaseBdev1 00:24:38.654 22:07:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:38.654 22:07:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:38.913 BaseBdev2_malloc 00:24:38.913 22:07:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:38.913 [2024-07-13 22:07:58.265660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:38.913 [2024-07-13 22:07:58.265710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.913 [2024-07-13 22:07:58.265746] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:24:38.913 [2024-07-13 
22:07:58.265762] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:38.913 [2024-07-13 22:07:58.267765] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:38.913 [2024-07-13 22:07:58.267793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:38.913 BaseBdev2 00:24:38.913 22:07:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:38.913 22:07:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:39.173 BaseBdev3_malloc 00:24:39.173 22:07:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:39.432 [2024-07-13 22:07:58.638754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:39.432 [2024-07-13 22:07:58.638807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:39.432 [2024-07-13 22:07:58.638849] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:24:39.432 [2024-07-13 22:07:58.638862] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:39.432 [2024-07-13 22:07:58.640933] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:39.432 [2024-07-13 22:07:58.640962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:39.432 BaseBdev3 00:24:39.432 22:07:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:39.432 22:07:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:39.691 BaseBdev4_malloc 00:24:39.691 22:07:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:39.691 [2024-07-13 22:07:59.007188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:39.691 [2024-07-13 22:07:59.007239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:39.691 [2024-07-13 22:07:59.007259] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80 00:24:39.691 [2024-07-13 22:07:59.007272] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:39.691 [2024-07-13 22:07:59.009328] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:39.691 [2024-07-13 22:07:59.009357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:39.691 BaseBdev4 00:24:39.691 22:07:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:39.951 spare_malloc 00:24:39.951 22:07:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:40.210 spare_delay 00:24:40.210 22:07:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:40.210 [2024-07-13 22:07:59.559098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:40.210 [2024-07-13 22:07:59.559154] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:40.210 [2024-07-13 22:07:59.559191] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:24:40.210 [2024-07-13 22:07:59.559205] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:40.210 [2024-07-13 22:07:59.561305] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:40.210 [2024-07-13 22:07:59.561338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:40.210 spare 00:24:40.210 22:07:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:24:40.469 [2024-07-13 22:07:59.727598] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:40.469 [2024-07-13 22:07:59.729341] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:40.469 [2024-07-13 22:07:59.729397] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:40.469 [2024-07-13 22:07:59.729444] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:40.469 [2024-07-13 22:07:59.729634] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:24:40.469 [2024-07-13 22:07:59.729653] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:40.469 [2024-07-13 22:07:59.729930] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:24:40.469 [2024-07-13 22:07:59.730137] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:24:40.469 [2024-07-13 22:07:59.730148] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x616000043280
00:24:40.469 [2024-07-13 22:07:59.730308] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:40.469 22:07:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:24:40.469 22:07:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:24:40.469 22:07:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:24:40.469 22:07:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:24:40.469 22:07:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:24:40.469 22:07:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:24:40.469 22:07:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:24:40.469 22:07:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:24:40.469 22:07:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:24:40.469 22:07:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp
00:24:40.469 22:07:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:40.469 22:07:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:40.727 22:07:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:24:40.727 "name": "raid_bdev1",
00:24:40.727 "uuid": "ce454044-c8f4-4749-bb03-b57bbc5d5d3c",
00:24:40.727 "strip_size_kb": 0,
00:24:40.727 "state": "online",
00:24:40.727 "raid_level": "raid1",
00:24:40.727 "superblock": true,
00:24:40.727 "num_base_bdevs": 4,
00:24:40.727 "num_base_bdevs_discovered": 4,
00:24:40.727 "num_base_bdevs_operational": 4,
00:24:40.727 "base_bdevs_list": [
00:24:40.727 {
00:24:40.727 "name": "BaseBdev1",
00:24:40.727 "uuid": "c57bab76-acb5-565d-a970-10159fd92453",
00:24:40.727 "is_configured": true,
00:24:40.727 "data_offset": 2048,
00:24:40.727 "data_size": 63488
00:24:40.727 },
00:24:40.727 {
00:24:40.727 "name": "BaseBdev2",
00:24:40.727 "uuid": "25b63f55-d930-5f31-a3f7-fd3fb10a05f8",
00:24:40.727 "is_configured": true,
00:24:40.727 "data_offset": 2048,
00:24:40.727 "data_size": 63488
00:24:40.727 },
00:24:40.727 {
00:24:40.727 "name": "BaseBdev3",
00:24:40.727 "uuid": "e1ecbf56-24f9-58b2-b77b-e9b4b692dc58",
00:24:40.727 "is_configured": true,
00:24:40.727 "data_offset": 2048,
00:24:40.727 "data_size": 63488
00:24:40.727 },
00:24:40.727 {
00:24:40.727 "name": "BaseBdev4",
00:24:40.727 "uuid": "82aae25b-5930-5937-9ec7-04fea8641eef",
00:24:40.727 "is_configured": true,
00:24:40.727 "data_offset": 2048,
00:24:40.727 "data_size": 63488
00:24:40.727 }
00:24:40.727 ]
00:24:40.727 }'
00:24:40.727 22:07:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:24:40.727 22:07:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:24:40.985 22:08:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:24:40.985 22:08:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks'
00:24:41.243 [2024-07-13 22:08:00.525957] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:24:41.243 22:08:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488
00:24:41.243 22:08:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:41.243 22:08:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:41.501 22:08:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:24:41.501 22:08:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:24:41.501 22:08:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:41.501 22:08:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:41.501 [2024-07-13 22:08:00.807698] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010a50 00:24:41.501 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:41.501 Zero copy mechanism will not be used. 00:24:41.501 Running I/O for 60 seconds... 
00:24:41.501 [2024-07-13 22:08:00.881057] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:41.759 [2024-07-13 22:08:00.892041] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000010a50 00:24:41.759 22:08:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:41.759 22:08:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:41.759 22:08:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:41.759 22:08:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:41.759 22:08:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:41.759 22:08:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:41.759 22:08:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:41.759 22:08:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:41.759 22:08:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:41.759 22:08:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:41.759 22:08:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.759 22:08:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.759 22:08:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:41.759 "name": "raid_bdev1", 00:24:41.759 "uuid": "ce454044-c8f4-4749-bb03-b57bbc5d5d3c", 00:24:41.759 "strip_size_kb": 0, 00:24:41.759 "state": "online", 00:24:41.759 "raid_level": 
"raid1", 00:24:41.759 "superblock": true, 00:24:41.759 "num_base_bdevs": 4, 00:24:41.759 "num_base_bdevs_discovered": 3, 00:24:41.759 "num_base_bdevs_operational": 3, 00:24:41.759 "base_bdevs_list": [ 00:24:41.759 { 00:24:41.759 "name": null, 00:24:41.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.759 "is_configured": false, 00:24:41.759 "data_offset": 2048, 00:24:41.759 "data_size": 63488 00:24:41.759 }, 00:24:41.759 { 00:24:41.759 "name": "BaseBdev2", 00:24:41.759 "uuid": "25b63f55-d930-5f31-a3f7-fd3fb10a05f8", 00:24:41.759 "is_configured": true, 00:24:41.759 "data_offset": 2048, 00:24:41.759 "data_size": 63488 00:24:41.759 }, 00:24:41.759 { 00:24:41.759 "name": "BaseBdev3", 00:24:41.759 "uuid": "e1ecbf56-24f9-58b2-b77b-e9b4b692dc58", 00:24:41.759 "is_configured": true, 00:24:41.759 "data_offset": 2048, 00:24:41.759 "data_size": 63488 00:24:41.759 }, 00:24:41.759 { 00:24:41.759 "name": "BaseBdev4", 00:24:41.759 "uuid": "82aae25b-5930-5937-9ec7-04fea8641eef", 00:24:41.759 "is_configured": true, 00:24:41.759 "data_offset": 2048, 00:24:41.759 "data_size": 63488 00:24:41.759 } 00:24:41.759 ] 00:24:41.759 }' 00:24:41.759 22:08:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:41.759 22:08:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:42.364 22:08:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:42.622 [2024-07-13 22:08:01.778891] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:42.622 [2024-07-13 22:08:01.825125] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:24:42.622 [2024-07-13 22:08:01.827005] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:42.622 22:08:01 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@646 -- # sleep 1 00:24:42.622 [2024-07-13 22:08:01.943480] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:42.622 [2024-07-13 22:08:01.943795] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:42.879 [2024-07-13 22:08:02.159616] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:42.879 [2024-07-13 22:08:02.159819] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:43.137 [2024-07-13 22:08:02.490786] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:43.137 [2024-07-13 22:08:02.491184] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:43.395 [2024-07-13 22:08:02.714944] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:43.395 [2024-07-13 22:08:02.715220] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:43.654 22:08:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:43.654 22:08:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:43.654 22:08:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:43.654 22:08:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:43.654 22:08:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:43.654 22:08:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:43.654 22:08:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.654 22:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:43.654 "name": "raid_bdev1", 00:24:43.654 "uuid": "ce454044-c8f4-4749-bb03-b57bbc5d5d3c", 00:24:43.654 "strip_size_kb": 0, 00:24:43.654 "state": "online", 00:24:43.654 "raid_level": "raid1", 00:24:43.654 "superblock": true, 00:24:43.654 "num_base_bdevs": 4, 00:24:43.654 "num_base_bdevs_discovered": 4, 00:24:43.654 "num_base_bdevs_operational": 4, 00:24:43.654 "process": { 00:24:43.654 "type": "rebuild", 00:24:43.654 "target": "spare", 00:24:43.654 "progress": { 00:24:43.654 "blocks": 12288, 00:24:43.654 "percent": 19 00:24:43.654 } 00:24:43.654 }, 00:24:43.654 "base_bdevs_list": [ 00:24:43.654 { 00:24:43.654 "name": "spare", 00:24:43.654 "uuid": "28974fc2-551b-5e24-9224-3ac52abed6ad", 00:24:43.654 "is_configured": true, 00:24:43.654 "data_offset": 2048, 00:24:43.654 "data_size": 63488 00:24:43.654 }, 00:24:43.654 { 00:24:43.654 "name": "BaseBdev2", 00:24:43.654 "uuid": "25b63f55-d930-5f31-a3f7-fd3fb10a05f8", 00:24:43.654 "is_configured": true, 00:24:43.654 "data_offset": 2048, 00:24:43.654 "data_size": 63488 00:24:43.654 }, 00:24:43.654 { 00:24:43.654 "name": "BaseBdev3", 00:24:43.654 "uuid": "e1ecbf56-24f9-58b2-b77b-e9b4b692dc58", 00:24:43.654 "is_configured": true, 00:24:43.654 "data_offset": 2048, 00:24:43.654 "data_size": 63488 00:24:43.654 }, 00:24:43.654 { 00:24:43.654 "name": "BaseBdev4", 00:24:43.654 "uuid": "82aae25b-5930-5937-9ec7-04fea8641eef", 00:24:43.654 "is_configured": true, 00:24:43.654 "data_offset": 2048, 00:24:43.654 "data_size": 63488 00:24:43.654 } 00:24:43.654 ] 00:24:43.654 }' 00:24:43.654 22:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
00:24:43.912 [2024-07-13 22:08:03.048258] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:43.912 22:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:43.912 22:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:43.912 22:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:43.912 22:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:43.912 [2024-07-13 22:08:03.150711] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:43.912 [2024-07-13 22:08:03.248076] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:44.170 [2024-07-13 22:08:03.366979] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:44.170 [2024-07-13 22:08:03.377060] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:44.170 [2024-07-13 22:08:03.377101] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:44.170 [2024-07-13 22:08:03.377114] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:44.170 [2024-07-13 22:08:03.408800] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000010a50 00:24:44.170 22:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:44.170 22:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:44.170 22:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:24:44.170 22:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:44.170 22:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:44.170 22:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:44.170 22:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:44.170 22:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:44.170 22:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:44.170 22:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:44.170 22:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.170 22:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.427 22:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:44.427 "name": "raid_bdev1", 00:24:44.427 "uuid": "ce454044-c8f4-4749-bb03-b57bbc5d5d3c", 00:24:44.427 "strip_size_kb": 0, 00:24:44.427 "state": "online", 00:24:44.427 "raid_level": "raid1", 00:24:44.427 "superblock": true, 00:24:44.427 "num_base_bdevs": 4, 00:24:44.427 "num_base_bdevs_discovered": 3, 00:24:44.427 "num_base_bdevs_operational": 3, 00:24:44.427 "base_bdevs_list": [ 00:24:44.427 { 00:24:44.427 "name": null, 00:24:44.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.427 "is_configured": false, 00:24:44.427 "data_offset": 2048, 00:24:44.427 "data_size": 63488 00:24:44.427 }, 00:24:44.427 { 00:24:44.427 "name": "BaseBdev2", 00:24:44.427 "uuid": "25b63f55-d930-5f31-a3f7-fd3fb10a05f8", 00:24:44.427 "is_configured": true, 00:24:44.427 
"data_offset": 2048, 00:24:44.427 "data_size": 63488 00:24:44.427 }, 00:24:44.427 { 00:24:44.427 "name": "BaseBdev3", 00:24:44.427 "uuid": "e1ecbf56-24f9-58b2-b77b-e9b4b692dc58", 00:24:44.427 "is_configured": true, 00:24:44.427 "data_offset": 2048, 00:24:44.427 "data_size": 63488 00:24:44.427 }, 00:24:44.427 { 00:24:44.427 "name": "BaseBdev4", 00:24:44.427 "uuid": "82aae25b-5930-5937-9ec7-04fea8641eef", 00:24:44.427 "is_configured": true, 00:24:44.427 "data_offset": 2048, 00:24:44.427 "data_size": 63488 00:24:44.427 } 00:24:44.427 ] 00:24:44.427 }' 00:24:44.427 22:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:44.427 22:08:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:44.993 22:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:44.993 22:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:44.993 22:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:44.993 22:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:44.993 22:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:44.993 22:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.993 22:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.993 22:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:44.993 "name": "raid_bdev1", 00:24:44.993 "uuid": "ce454044-c8f4-4749-bb03-b57bbc5d5d3c", 00:24:44.993 "strip_size_kb": 0, 00:24:44.993 "state": "online", 00:24:44.993 "raid_level": "raid1", 00:24:44.993 "superblock": true, 
00:24:44.993 "num_base_bdevs": 4, 00:24:44.993 "num_base_bdevs_discovered": 3, 00:24:44.993 "num_base_bdevs_operational": 3, 00:24:44.993 "base_bdevs_list": [ 00:24:44.993 { 00:24:44.993 "name": null, 00:24:44.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.993 "is_configured": false, 00:24:44.993 "data_offset": 2048, 00:24:44.993 "data_size": 63488 00:24:44.993 }, 00:24:44.993 { 00:24:44.993 "name": "BaseBdev2", 00:24:44.993 "uuid": "25b63f55-d930-5f31-a3f7-fd3fb10a05f8", 00:24:44.993 "is_configured": true, 00:24:44.993 "data_offset": 2048, 00:24:44.993 "data_size": 63488 00:24:44.993 }, 00:24:44.993 { 00:24:44.993 "name": "BaseBdev3", 00:24:44.993 "uuid": "e1ecbf56-24f9-58b2-b77b-e9b4b692dc58", 00:24:44.993 "is_configured": true, 00:24:44.993 "data_offset": 2048, 00:24:44.993 "data_size": 63488 00:24:44.993 }, 00:24:44.993 { 00:24:44.993 "name": "BaseBdev4", 00:24:44.993 "uuid": "82aae25b-5930-5937-9ec7-04fea8641eef", 00:24:44.993 "is_configured": true, 00:24:44.993 "data_offset": 2048, 00:24:44.993 "data_size": 63488 00:24:44.993 } 00:24:44.993 ] 00:24:44.993 }' 00:24:44.993 22:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:44.993 22:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:44.993 22:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:45.251 22:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:45.251 22:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:45.251 [2024-07-13 22:08:04.547972] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:45.251 22:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:45.251 
[2024-07-13 22:08:04.599659] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010bf0 00:24:45.251 [2024-07-13 22:08:04.601553] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:45.510 [2024-07-13 22:08:04.731156] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:45.510 [2024-07-13 22:08:04.732302] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:45.769 [2024-07-13 22:08:04.991796] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:46.028 [2024-07-13 22:08:05.346428] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:46.286 [2024-07-13 22:08:05.568265] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:46.286 [2024-07-13 22:08:05.568809] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:46.286 22:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:46.286 22:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:46.286 22:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:46.286 22:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:46.286 22:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:46.286 22:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:24:46.286 22:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:46.546 22:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:46.546 "name": "raid_bdev1", 00:24:46.546 "uuid": "ce454044-c8f4-4749-bb03-b57bbc5d5d3c", 00:24:46.546 "strip_size_kb": 0, 00:24:46.546 "state": "online", 00:24:46.546 "raid_level": "raid1", 00:24:46.546 "superblock": true, 00:24:46.546 "num_base_bdevs": 4, 00:24:46.546 "num_base_bdevs_discovered": 4, 00:24:46.546 "num_base_bdevs_operational": 4, 00:24:46.546 "process": { 00:24:46.546 "type": "rebuild", 00:24:46.546 "target": "spare", 00:24:46.546 "progress": { 00:24:46.546 "blocks": 10240, 00:24:46.546 "percent": 16 00:24:46.546 } 00:24:46.546 }, 00:24:46.546 "base_bdevs_list": [ 00:24:46.546 { 00:24:46.546 "name": "spare", 00:24:46.546 "uuid": "28974fc2-551b-5e24-9224-3ac52abed6ad", 00:24:46.546 "is_configured": true, 00:24:46.546 "data_offset": 2048, 00:24:46.546 "data_size": 63488 00:24:46.546 }, 00:24:46.546 { 00:24:46.546 "name": "BaseBdev2", 00:24:46.546 "uuid": "25b63f55-d930-5f31-a3f7-fd3fb10a05f8", 00:24:46.546 "is_configured": true, 00:24:46.546 "data_offset": 2048, 00:24:46.546 "data_size": 63488 00:24:46.546 }, 00:24:46.546 { 00:24:46.546 "name": "BaseBdev3", 00:24:46.546 "uuid": "e1ecbf56-24f9-58b2-b77b-e9b4b692dc58", 00:24:46.546 "is_configured": true, 00:24:46.546 "data_offset": 2048, 00:24:46.546 "data_size": 63488 00:24:46.546 }, 00:24:46.546 { 00:24:46.546 "name": "BaseBdev4", 00:24:46.546 "uuid": "82aae25b-5930-5937-9ec7-04fea8641eef", 00:24:46.546 "is_configured": true, 00:24:46.546 "data_offset": 2048, 00:24:46.546 "data_size": 63488 00:24:46.546 } 00:24:46.546 ] 00:24:46.546 }' 00:24:46.546 22:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:46.546 22:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d 
]] 00:24:46.546 22:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:46.546 22:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:46.546 22:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:24:46.546 22:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:24:46.546 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:24:46.546 22:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:24:46.546 22:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:24:46.546 22:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:24:46.546 22:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:46.546 [2024-07-13 22:08:05.926302] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:46.805 [2024-07-13 22:08:06.020363] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:47.063 [2024-07-13 22:08:06.335421] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000010a50 00:24:47.064 [2024-07-13 22:08:06.335460] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000010bf0 00:24:47.064 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:24:47.064 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:24:47.064 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:24:47.064 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:47.064 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:47.064 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:47.064 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:47.064 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:47.064 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.322 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:47.322 "name": "raid_bdev1", 00:24:47.322 "uuid": "ce454044-c8f4-4749-bb03-b57bbc5d5d3c", 00:24:47.322 "strip_size_kb": 0, 00:24:47.322 "state": "online", 00:24:47.322 "raid_level": "raid1", 00:24:47.322 "superblock": true, 00:24:47.322 "num_base_bdevs": 4, 00:24:47.322 "num_base_bdevs_discovered": 3, 00:24:47.322 "num_base_bdevs_operational": 3, 00:24:47.322 "process": { 00:24:47.322 "type": "rebuild", 00:24:47.322 "target": "spare", 00:24:47.322 "progress": { 00:24:47.322 "blocks": 18432, 00:24:47.322 "percent": 29 00:24:47.322 } 00:24:47.322 }, 00:24:47.322 "base_bdevs_list": [ 00:24:47.322 { 00:24:47.322 "name": "spare", 00:24:47.322 "uuid": "28974fc2-551b-5e24-9224-3ac52abed6ad", 00:24:47.322 "is_configured": true, 00:24:47.322 "data_offset": 2048, 00:24:47.322 "data_size": 63488 00:24:47.322 }, 00:24:47.322 { 00:24:47.322 "name": null, 00:24:47.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.322 "is_configured": false, 00:24:47.322 "data_offset": 2048, 00:24:47.322 "data_size": 63488 00:24:47.322 }, 00:24:47.322 { 00:24:47.322 "name": "BaseBdev3", 00:24:47.322 
"uuid": "e1ecbf56-24f9-58b2-b77b-e9b4b692dc58", 00:24:47.322 "is_configured": true, 00:24:47.322 "data_offset": 2048, 00:24:47.322 "data_size": 63488 00:24:47.322 }, 00:24:47.322 { 00:24:47.322 "name": "BaseBdev4", 00:24:47.322 "uuid": "82aae25b-5930-5937-9ec7-04fea8641eef", 00:24:47.322 "is_configured": true, 00:24:47.322 "data_offset": 2048, 00:24:47.322 "data_size": 63488 00:24:47.322 } 00:24:47.322 ] 00:24:47.322 }' 00:24:47.322 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:47.322 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:47.322 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:47.322 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:47.322 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=817 00:24:47.322 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:47.322 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:47.322 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:47.322 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:47.322 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:47.322 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:47.322 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.322 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:24:47.580 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:47.580 "name": "raid_bdev1", 00:24:47.580 "uuid": "ce454044-c8f4-4749-bb03-b57bbc5d5d3c", 00:24:47.580 "strip_size_kb": 0, 00:24:47.580 "state": "online", 00:24:47.580 "raid_level": "raid1", 00:24:47.580 "superblock": true, 00:24:47.580 "num_base_bdevs": 4, 00:24:47.580 "num_base_bdevs_discovered": 3, 00:24:47.580 "num_base_bdevs_operational": 3, 00:24:47.580 "process": { 00:24:47.580 "type": "rebuild", 00:24:47.580 "target": "spare", 00:24:47.580 "progress": { 00:24:47.580 "blocks": 22528, 00:24:47.580 "percent": 35 00:24:47.580 } 00:24:47.580 }, 00:24:47.580 "base_bdevs_list": [ 00:24:47.580 { 00:24:47.580 "name": "spare", 00:24:47.580 "uuid": "28974fc2-551b-5e24-9224-3ac52abed6ad", 00:24:47.580 "is_configured": true, 00:24:47.580 "data_offset": 2048, 00:24:47.580 "data_size": 63488 00:24:47.580 }, 00:24:47.580 { 00:24:47.580 "name": null, 00:24:47.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.580 "is_configured": false, 00:24:47.580 "data_offset": 2048, 00:24:47.580 "data_size": 63488 00:24:47.580 }, 00:24:47.580 { 00:24:47.580 "name": "BaseBdev3", 00:24:47.580 "uuid": "e1ecbf56-24f9-58b2-b77b-e9b4b692dc58", 00:24:47.580 "is_configured": true, 00:24:47.580 "data_offset": 2048, 00:24:47.580 "data_size": 63488 00:24:47.580 }, 00:24:47.580 { 00:24:47.580 "name": "BaseBdev4", 00:24:47.580 "uuid": "82aae25b-5930-5937-9ec7-04fea8641eef", 00:24:47.580 "is_configured": true, 00:24:47.580 "data_offset": 2048, 00:24:47.580 "data_size": 63488 00:24:47.580 } 00:24:47.580 ] 00:24:47.580 }' 00:24:47.580 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:47.580 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:47.580 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:24:47.580 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:47.580 22:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:47.580 [2024-07-13 22:08:06.954721] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:24:47.838 [2024-07-13 22:08:07.068907] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:24:47.838 [2024-07-13 22:08:07.069318] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:24:48.403 [2024-07-13 22:08:07.703446] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:24:48.661 [2024-07-13 22:08:07.830923] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:24:48.661 22:08:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:48.661 22:08:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:48.661 22:08:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:48.661 22:08:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:48.661 22:08:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:48.661 22:08:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:48.661 22:08:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.661 22:08:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.661 22:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:48.661 "name": "raid_bdev1", 00:24:48.661 "uuid": "ce454044-c8f4-4749-bb03-b57bbc5d5d3c", 00:24:48.661 "strip_size_kb": 0, 00:24:48.661 "state": "online", 00:24:48.661 "raid_level": "raid1", 00:24:48.661 "superblock": true, 00:24:48.661 "num_base_bdevs": 4, 00:24:48.661 "num_base_bdevs_discovered": 3, 00:24:48.661 "num_base_bdevs_operational": 3, 00:24:48.661 "process": { 00:24:48.661 "type": "rebuild", 00:24:48.661 "target": "spare", 00:24:48.661 "progress": { 00:24:48.661 "blocks": 40960, 00:24:48.661 "percent": 64 00:24:48.661 } 00:24:48.661 }, 00:24:48.661 "base_bdevs_list": [ 00:24:48.661 { 00:24:48.661 "name": "spare", 00:24:48.661 "uuid": "28974fc2-551b-5e24-9224-3ac52abed6ad", 00:24:48.661 "is_configured": true, 00:24:48.661 "data_offset": 2048, 00:24:48.661 "data_size": 63488 00:24:48.661 }, 00:24:48.661 { 00:24:48.661 "name": null, 00:24:48.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.661 "is_configured": false, 00:24:48.661 "data_offset": 2048, 00:24:48.661 "data_size": 63488 00:24:48.661 }, 00:24:48.661 { 00:24:48.661 "name": "BaseBdev3", 00:24:48.661 "uuid": "e1ecbf56-24f9-58b2-b77b-e9b4b692dc58", 00:24:48.661 "is_configured": true, 00:24:48.661 "data_offset": 2048, 00:24:48.661 "data_size": 63488 00:24:48.661 }, 00:24:48.661 { 00:24:48.661 "name": "BaseBdev4", 00:24:48.661 "uuid": "82aae25b-5930-5937-9ec7-04fea8641eef", 00:24:48.662 "is_configured": true, 00:24:48.662 "data_offset": 2048, 00:24:48.662 "data_size": 63488 00:24:48.662 } 00:24:48.662 ] 00:24:48.662 }' 00:24:48.662 22:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:48.919 22:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:48.919 22:08:08 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:48.919 22:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:48.919 22:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:48.919 [2024-07-13 22:08:08.154843] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:24:49.177 [2024-07-13 22:08:08.381005] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:24:49.744 [2024-07-13 22:08:09.028829] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:24:49.744 22:08:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:49.744 22:08:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:49.744 22:08:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:49.744 22:08:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:49.744 22:08:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:49.744 22:08:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:49.744 22:08:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.744 22:08:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:50.001 [2024-07-13 22:08:09.252852] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:24:50.001 22:08:09 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:50.001 "name": "raid_bdev1", 00:24:50.001 "uuid": "ce454044-c8f4-4749-bb03-b57bbc5d5d3c", 00:24:50.001 "strip_size_kb": 0, 00:24:50.001 "state": "online", 00:24:50.001 "raid_level": "raid1", 00:24:50.001 "superblock": true, 00:24:50.001 "num_base_bdevs": 4, 00:24:50.001 "num_base_bdevs_discovered": 3, 00:24:50.001 "num_base_bdevs_operational": 3, 00:24:50.001 "process": { 00:24:50.001 "type": "rebuild", 00:24:50.001 "target": "spare", 00:24:50.001 "progress": { 00:24:50.001 "blocks": 59392, 00:24:50.001 "percent": 93 00:24:50.001 } 00:24:50.001 }, 00:24:50.001 "base_bdevs_list": [ 00:24:50.001 { 00:24:50.001 "name": "spare", 00:24:50.001 "uuid": "28974fc2-551b-5e24-9224-3ac52abed6ad", 00:24:50.001 "is_configured": true, 00:24:50.001 "data_offset": 2048, 00:24:50.001 "data_size": 63488 00:24:50.001 }, 00:24:50.001 { 00:24:50.001 "name": null, 00:24:50.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.001 "is_configured": false, 00:24:50.001 "data_offset": 2048, 00:24:50.002 "data_size": 63488 00:24:50.002 }, 00:24:50.002 { 00:24:50.002 "name": "BaseBdev3", 00:24:50.002 "uuid": "e1ecbf56-24f9-58b2-b77b-e9b4b692dc58", 00:24:50.002 "is_configured": true, 00:24:50.002 "data_offset": 2048, 00:24:50.002 "data_size": 63488 00:24:50.002 }, 00:24:50.002 { 00:24:50.002 "name": "BaseBdev4", 00:24:50.002 "uuid": "82aae25b-5930-5937-9ec7-04fea8641eef", 00:24:50.002 "is_configured": true, 00:24:50.002 "data_offset": 2048, 00:24:50.002 "data_size": 63488 00:24:50.002 } 00:24:50.002 ] 00:24:50.002 }' 00:24:50.002 22:08:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:50.002 22:08:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:50.002 22:08:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:50.002 22:08:09 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:50.002 22:08:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:50.259 [2024-07-13 22:08:09.583658] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:50.517 [2024-07-13 22:08:09.688956] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:50.517 [2024-07-13 22:08:09.691513] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:51.082 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:51.082 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:51.082 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:51.082 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:51.082 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:51.082 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:51.082 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.082 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.341 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:51.341 "name": "raid_bdev1", 00:24:51.341 "uuid": "ce454044-c8f4-4749-bb03-b57bbc5d5d3c", 00:24:51.341 "strip_size_kb": 0, 00:24:51.341 "state": "online", 00:24:51.341 "raid_level": "raid1", 00:24:51.341 "superblock": true, 00:24:51.341 "num_base_bdevs": 4, 00:24:51.341 "num_base_bdevs_discovered": 3, 00:24:51.341 
"num_base_bdevs_operational": 3, 00:24:51.341 "base_bdevs_list": [ 00:24:51.341 { 00:24:51.341 "name": "spare", 00:24:51.341 "uuid": "28974fc2-551b-5e24-9224-3ac52abed6ad", 00:24:51.341 "is_configured": true, 00:24:51.341 "data_offset": 2048, 00:24:51.341 "data_size": 63488 00:24:51.341 }, 00:24:51.341 { 00:24:51.341 "name": null, 00:24:51.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:51.341 "is_configured": false, 00:24:51.341 "data_offset": 2048, 00:24:51.341 "data_size": 63488 00:24:51.341 }, 00:24:51.341 { 00:24:51.341 "name": "BaseBdev3", 00:24:51.341 "uuid": "e1ecbf56-24f9-58b2-b77b-e9b4b692dc58", 00:24:51.341 "is_configured": true, 00:24:51.341 "data_offset": 2048, 00:24:51.341 "data_size": 63488 00:24:51.341 }, 00:24:51.341 { 00:24:51.341 "name": "BaseBdev4", 00:24:51.341 "uuid": "82aae25b-5930-5937-9ec7-04fea8641eef", 00:24:51.341 "is_configured": true, 00:24:51.341 "data_offset": 2048, 00:24:51.341 "data_size": 63488 00:24:51.341 } 00:24:51.341 ] 00:24:51.341 }' 00:24:51.341 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:51.341 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:51.341 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:51.341 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:24:51.341 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:24:51.341 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:51.341 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:51.341 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:51.341 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- 
# local target=none 00:24:51.341 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:51.341 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.341 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.599 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:51.599 "name": "raid_bdev1", 00:24:51.599 "uuid": "ce454044-c8f4-4749-bb03-b57bbc5d5d3c", 00:24:51.599 "strip_size_kb": 0, 00:24:51.599 "state": "online", 00:24:51.599 "raid_level": "raid1", 00:24:51.599 "superblock": true, 00:24:51.599 "num_base_bdevs": 4, 00:24:51.599 "num_base_bdevs_discovered": 3, 00:24:51.599 "num_base_bdevs_operational": 3, 00:24:51.599 "base_bdevs_list": [ 00:24:51.599 { 00:24:51.599 "name": "spare", 00:24:51.599 "uuid": "28974fc2-551b-5e24-9224-3ac52abed6ad", 00:24:51.599 "is_configured": true, 00:24:51.599 "data_offset": 2048, 00:24:51.599 "data_size": 63488 00:24:51.599 }, 00:24:51.599 { 00:24:51.599 "name": null, 00:24:51.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:51.599 "is_configured": false, 00:24:51.599 "data_offset": 2048, 00:24:51.599 "data_size": 63488 00:24:51.599 }, 00:24:51.599 { 00:24:51.599 "name": "BaseBdev3", 00:24:51.599 "uuid": "e1ecbf56-24f9-58b2-b77b-e9b4b692dc58", 00:24:51.599 "is_configured": true, 00:24:51.599 "data_offset": 2048, 00:24:51.599 "data_size": 63488 00:24:51.599 }, 00:24:51.599 { 00:24:51.599 "name": "BaseBdev4", 00:24:51.599 "uuid": "82aae25b-5930-5937-9ec7-04fea8641eef", 00:24:51.599 "is_configured": true, 00:24:51.599 "data_offset": 2048, 00:24:51.599 "data_size": 63488 00:24:51.599 } 00:24:51.599 ] 00:24:51.599 }' 00:24:51.599 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
00:24:51.599 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:51.599 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:51.599 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:51.599 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:51.599 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:51.599 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:51.599 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:51.599 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:51.599 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:51.599 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:51.599 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:51.599 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:51.599 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:51.599 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.599 22:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.857 22:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:51.857 "name": "raid_bdev1", 00:24:51.857 "uuid": "ce454044-c8f4-4749-bb03-b57bbc5d5d3c", 
00:24:51.857 "strip_size_kb": 0, 00:24:51.857 "state": "online", 00:24:51.857 "raid_level": "raid1", 00:24:51.857 "superblock": true, 00:24:51.857 "num_base_bdevs": 4, 00:24:51.857 "num_base_bdevs_discovered": 3, 00:24:51.857 "num_base_bdevs_operational": 3, 00:24:51.857 "base_bdevs_list": [ 00:24:51.857 { 00:24:51.857 "name": "spare", 00:24:51.857 "uuid": "28974fc2-551b-5e24-9224-3ac52abed6ad", 00:24:51.857 "is_configured": true, 00:24:51.857 "data_offset": 2048, 00:24:51.857 "data_size": 63488 00:24:51.857 }, 00:24:51.857 { 00:24:51.857 "name": null, 00:24:51.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:51.857 "is_configured": false, 00:24:51.857 "data_offset": 2048, 00:24:51.857 "data_size": 63488 00:24:51.857 }, 00:24:51.857 { 00:24:51.857 "name": "BaseBdev3", 00:24:51.857 "uuid": "e1ecbf56-24f9-58b2-b77b-e9b4b692dc58", 00:24:51.857 "is_configured": true, 00:24:51.857 "data_offset": 2048, 00:24:51.857 "data_size": 63488 00:24:51.857 }, 00:24:51.857 { 00:24:51.857 "name": "BaseBdev4", 00:24:51.857 "uuid": "82aae25b-5930-5937-9ec7-04fea8641eef", 00:24:51.857 "is_configured": true, 00:24:51.857 "data_offset": 2048, 00:24:51.857 "data_size": 63488 00:24:51.857 } 00:24:51.857 ] 00:24:51.857 }' 00:24:51.857 22:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:51.857 22:08:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:52.423 22:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:52.423 [2024-07-13 22:08:11.741911] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:52.423 [2024-07-13 22:08:11.741953] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:52.681 00:24:52.681 Latency(us) 00:24:52.681 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:24:52.681 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:24:52.681 raid_bdev1 : 11.01 101.86 305.57 0.00 0.00 13752.35 290.00 116601.65 00:24:52.681 =================================================================================================================== 00:24:52.681 Total : 101.86 305.57 0.00 0.00 13752.35 290.00 116601.65 00:24:52.681 [2024-07-13 22:08:11.862037] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:52.681 [2024-07-13 22:08:11.862080] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:52.681 [2024-07-13 22:08:11.862170] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:52.681 [2024-07-13 22:08:11.862184] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:24:52.681 0 00:24:52.681 22:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:52.681 22:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:24:52.681 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:24:52.681 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:24:52.681 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:52.681 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:24:52.681 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:52.681 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:24:52.681 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:24:52.681 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:52.681 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:52.681 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:24:52.681 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:52.681 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:52.681 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:24:52.940 /dev/nbd0 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:52.940 1+0 records in 00:24:52.940 1+0 records out 00:24:52.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275938 s, 14.8 MB/s 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # continue 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev3') 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:52.940 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:24:53.200 /dev/nbd1 00:24:53.200 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:53.200 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:53.200 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:24:53.200 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:24:53.200 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:53.200 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:53.200 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:24:53.200 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:24:53.200 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:53.200 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:53.200 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:53.200 1+0 records in 00:24:53.200 1+0 records out 00:24:53.200 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232384 s, 17.6 MB/s 00:24:53.200 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:53.200 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:24:53.200 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:53.200 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:53.200 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:24:53.200 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:53.200 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:53.200 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:53.459 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:53.459 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:53.459 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:53.459 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:53.459 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:24:53.459 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:53.459 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:53.459 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:53.459 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:53.459 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:53.459 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:53.459 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:53.459 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:53.459 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:24:53.459 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:53.459 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:24:53.459 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:24:53.459 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:24:53.459 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:53.459 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:24:53.459 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:53.459 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:53.459 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:53.459 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:24:53.459 22:08:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:53.459 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:53.459 22:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:24:53.718 /dev/nbd1 00:24:53.718 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:53.718 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:53.718 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:24:53.718 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:24:53.718 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:53.718 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:53.718 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:24:53.718 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:24:53.718 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:53.718 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:53.718 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:53.718 1+0 records in 00:24:53.718 1+0 records out 00:24:53.718 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271172 s, 15.1 MB/s 00:24:53.719 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 
00:24:53.719 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:24:53.719 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:53.719 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:53.719 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:24:53.719 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:53.719 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:53.719 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:53.987 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:53.987 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:53.987 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:53.988 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:53.988 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:24:53.988 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:53.988 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:53.988 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:53.988 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:53.988 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:24:53.988 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:53.988 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:53.988 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:53.988 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:24:53.988 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:53.988 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:53.988 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:53.988 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:53.988 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:53.988 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:24:53.988 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:53.989 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:54.252 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:54.252 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:54.252 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:54.252 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:54.252 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:54.252 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:24:54.252 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:24:54.252 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:54.252 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:24:54.252 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:54.510 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:54.510 [2024-07-13 22:08:13.825182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:54.510 [2024-07-13 22:08:13.825237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:54.510 [2024-07-13 22:08:13.825278] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044d80 00:24:54.510 [2024-07-13 22:08:13.825290] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:54.510 [2024-07-13 22:08:13.827486] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:54.510 [2024-07-13 22:08:13.827516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:54.511 [2024-07-13 22:08:13.827609] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:54.511 [2024-07-13 22:08:13.827662] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:54.511 [2024-07-13 22:08:13.827842] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:54.511 [2024-07-13 22:08:13.827940] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:54.511 spare 00:24:54.511 
22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:54.511 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:54.511 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:54.511 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:54.511 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:54.511 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:54.511 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:54.511 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:54.511 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:54.511 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:54.511 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:54.511 22:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.770 [2024-07-13 22:08:13.928269] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000045380 00:24:54.770 [2024-07-13 22:08:13.928296] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:54.770 [2024-07-13 22:08:13.928575] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000041990 00:24:54.770 [2024-07-13 22:08:13.928775] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000045380 00:24:54.770 [2024-07-13 22:08:13.928792] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000045380 00:24:54.770 [2024-07-13 22:08:13.928951] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:54.770 22:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:54.770 "name": "raid_bdev1", 00:24:54.770 "uuid": "ce454044-c8f4-4749-bb03-b57bbc5d5d3c", 00:24:54.770 "strip_size_kb": 0, 00:24:54.770 "state": "online", 00:24:54.770 "raid_level": "raid1", 00:24:54.770 "superblock": true, 00:24:54.770 "num_base_bdevs": 4, 00:24:54.770 "num_base_bdevs_discovered": 3, 00:24:54.770 "num_base_bdevs_operational": 3, 00:24:54.770 "base_bdevs_list": [ 00:24:54.770 { 00:24:54.770 "name": "spare", 00:24:54.770 "uuid": "28974fc2-551b-5e24-9224-3ac52abed6ad", 00:24:54.770 "is_configured": true, 00:24:54.770 "data_offset": 2048, 00:24:54.770 "data_size": 63488 00:24:54.770 }, 00:24:54.770 { 00:24:54.770 "name": null, 00:24:54.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.770 "is_configured": false, 00:24:54.770 "data_offset": 2048, 00:24:54.770 "data_size": 63488 00:24:54.770 }, 00:24:54.770 { 00:24:54.770 "name": "BaseBdev3", 00:24:54.770 "uuid": "e1ecbf56-24f9-58b2-b77b-e9b4b692dc58", 00:24:54.770 "is_configured": true, 00:24:54.770 "data_offset": 2048, 00:24:54.770 "data_size": 63488 00:24:54.770 }, 00:24:54.770 { 00:24:54.770 "name": "BaseBdev4", 00:24:54.770 "uuid": "82aae25b-5930-5937-9ec7-04fea8641eef", 00:24:54.770 "is_configured": true, 00:24:54.770 "data_offset": 2048, 00:24:54.770 "data_size": 63488 00:24:54.770 } 00:24:54.770 ] 00:24:54.770 }' 00:24:54.770 22:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:54.770 22:08:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:55.371 22:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 
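The nbd teardown near the top of this chunk goes through nbd_common.sh's `waitfornbd_exit`, which polls `/proc/partitions` up to 20 times waiting for the device name to disappear before `nbd_stop_disks` returns. A minimal stand-alone sketch of that polling pattern — the function name and the `partitions_file` argument are illustrative stand-ins (the real helper reads `/proc/partitions` directly):

```shell
#!/usr/bin/env bash
# Sketch of the waitfornbd_exit polling pattern from nbd_common.sh.
# partitions_file stands in for /proc/partitions so the sketch is testable.
wait_for_nbd_exit() {
    local nbd_name=$1 partitions_file=$2 i
    for ((i = 1; i <= 20; i++)); do
        # -w matches a whole word, so nbd1 does not also match nbd10
        if ! grep -q -w "$nbd_name" "$partitions_file"; then
            return 0    # device is gone
        fi
        sleep 0.1
    done
    return 1            # still present after ~2 seconds of retries
}

# demo: a file without the device name returns immediately
tmp=$(mktemp); printf 'sda\n' > "$tmp"
wait_for_nbd_exit nbd0 "$tmp" && echo "nbd0 gone"
rm -f "$tmp"
```

The `-w` flag is what makes the `grep -q -w nbd1` / `grep -q -w nbd0` lines in the log safe: without it, `nbd1` would still "match" while `nbd10` is attached.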
00:24:55.371 22:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:55.371 22:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:55.371 22:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:55.371 22:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:55.371 22:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.371 22:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:55.371 22:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:55.371 "name": "raid_bdev1", 00:24:55.371 "uuid": "ce454044-c8f4-4749-bb03-b57bbc5d5d3c", 00:24:55.371 "strip_size_kb": 0, 00:24:55.371 "state": "online", 00:24:55.371 "raid_level": "raid1", 00:24:55.371 "superblock": true, 00:24:55.371 "num_base_bdevs": 4, 00:24:55.371 "num_base_bdevs_discovered": 3, 00:24:55.371 "num_base_bdevs_operational": 3, 00:24:55.371 "base_bdevs_list": [ 00:24:55.371 { 00:24:55.371 "name": "spare", 00:24:55.371 "uuid": "28974fc2-551b-5e24-9224-3ac52abed6ad", 00:24:55.371 "is_configured": true, 00:24:55.371 "data_offset": 2048, 00:24:55.371 "data_size": 63488 00:24:55.371 }, 00:24:55.371 { 00:24:55.371 "name": null, 00:24:55.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.371 "is_configured": false, 00:24:55.371 "data_offset": 2048, 00:24:55.371 "data_size": 63488 00:24:55.371 }, 00:24:55.371 { 00:24:55.371 "name": "BaseBdev3", 00:24:55.371 "uuid": "e1ecbf56-24f9-58b2-b77b-e9b4b692dc58", 00:24:55.371 "is_configured": true, 00:24:55.371 "data_offset": 2048, 00:24:55.371 "data_size": 63488 00:24:55.371 }, 00:24:55.371 { 00:24:55.371 "name": "BaseBdev4", 00:24:55.371 
"uuid": "82aae25b-5930-5937-9ec7-04fea8641eef", 00:24:55.371 "is_configured": true, 00:24:55.371 "data_offset": 2048, 00:24:55.371 "data_size": 63488 00:24:55.371 } 00:24:55.371 ] 00:24:55.371 }' 00:24:55.371 22:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:55.371 22:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:55.371 22:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:55.371 22:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:55.371 22:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:55.371 22:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.631 22:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:24:55.631 22:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:55.891 [2024-07-13 22:08:15.076679] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:55.891 22:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:55.891 22:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:55.891 22:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:55.891 22:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:55.891 22:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:55.891 
22:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:55.891 22:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:55.891 22:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:55.891 22:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:55.891 22:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:55.891 22:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.891 22:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:55.891 22:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:55.891 "name": "raid_bdev1", 00:24:55.891 "uuid": "ce454044-c8f4-4749-bb03-b57bbc5d5d3c", 00:24:55.891 "strip_size_kb": 0, 00:24:55.891 "state": "online", 00:24:55.891 "raid_level": "raid1", 00:24:55.891 "superblock": true, 00:24:55.891 "num_base_bdevs": 4, 00:24:55.891 "num_base_bdevs_discovered": 2, 00:24:55.891 "num_base_bdevs_operational": 2, 00:24:55.891 "base_bdevs_list": [ 00:24:55.891 { 00:24:55.891 "name": null, 00:24:55.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.891 "is_configured": false, 00:24:55.891 "data_offset": 2048, 00:24:55.891 "data_size": 63488 00:24:55.891 }, 00:24:55.891 { 00:24:55.891 "name": null, 00:24:55.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.891 "is_configured": false, 00:24:55.891 "data_offset": 2048, 00:24:55.891 "data_size": 63488 00:24:55.891 }, 00:24:55.891 { 00:24:55.891 "name": "BaseBdev3", 00:24:55.891 "uuid": "e1ecbf56-24f9-58b2-b77b-e9b4b692dc58", 00:24:55.891 "is_configured": true, 00:24:55.891 "data_offset": 2048, 00:24:55.891 
"data_size": 63488 00:24:55.891 }, 00:24:55.891 { 00:24:55.891 "name": "BaseBdev4", 00:24:55.891 "uuid": "82aae25b-5930-5937-9ec7-04fea8641eef", 00:24:55.891 "is_configured": true, 00:24:55.891 "data_offset": 2048, 00:24:55.891 "data_size": 63488 00:24:55.891 } 00:24:55.891 ] 00:24:55.891 }' 00:24:55.891 22:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:55.891 22:08:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:56.460 22:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:56.719 [2024-07-13 22:08:15.898985] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:56.719 [2024-07-13 22:08:15.899194] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:24:56.719 [2024-07-13 22:08:15.899212] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
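The "Re-adding bdev spare" notice above comes out of `raid_bdev_examine_sb()`: the superblock on the re-attached spare carries sequence number 5, older than the live raid_bdev1 at 6, so the bdev is re-added to the existing raid and a rebuild starts instead of the raid being reconfigured from stale metadata. A hedged bash re-creation of just that comparison — the action labels are made up here, not SPDK's:

```shell
# Illustrative sketch of the seq_number decision logged by
# raid_bdev_examine_sb (bdev_raid.c); the echoed labels are invented.
examine_sb_action() {
    local sb_seq=$1 raid_seq=$2
    if (( sb_seq < raid_seq )); then
        echo "re-add"      # stale superblock: rejoin the existing raid, rebuild
    elif (( sb_seq == raid_seq )); then
        echo "configure"   # superblock matches the assembled raid
    else
        echo "examine-new" # superblock is newer than what is assembled
    fi
}

examine_sb_action 5 6   # the case in the log above: prints "re-add"
```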
00:24:56.719 [2024-07-13 22:08:15.899250] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:56.719 [2024-07-13 22:08:15.916211] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000041a60 00:24:56.719 [2024-07-13 22:08:15.918213] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:56.719 22:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:24:57.657 22:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:57.657 22:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:57.657 22:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:57.657 22:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:57.657 22:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:57.657 22:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.657 22:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.916 22:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:57.916 "name": "raid_bdev1", 00:24:57.916 "uuid": "ce454044-c8f4-4749-bb03-b57bbc5d5d3c", 00:24:57.916 "strip_size_kb": 0, 00:24:57.916 "state": "online", 00:24:57.916 "raid_level": "raid1", 00:24:57.916 "superblock": true, 00:24:57.916 "num_base_bdevs": 4, 00:24:57.916 "num_base_bdevs_discovered": 3, 00:24:57.916 "num_base_bdevs_operational": 3, 00:24:57.916 "process": { 00:24:57.916 "type": "rebuild", 00:24:57.916 "target": "spare", 00:24:57.916 "progress": { 00:24:57.916 "blocks": 22528, 
00:24:57.916 "percent": 35 00:24:57.916 } 00:24:57.916 }, 00:24:57.916 "base_bdevs_list": [ 00:24:57.916 { 00:24:57.916 "name": "spare", 00:24:57.916 "uuid": "28974fc2-551b-5e24-9224-3ac52abed6ad", 00:24:57.916 "is_configured": true, 00:24:57.916 "data_offset": 2048, 00:24:57.916 "data_size": 63488 00:24:57.916 }, 00:24:57.916 { 00:24:57.916 "name": null, 00:24:57.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.916 "is_configured": false, 00:24:57.916 "data_offset": 2048, 00:24:57.916 "data_size": 63488 00:24:57.916 }, 00:24:57.916 { 00:24:57.916 "name": "BaseBdev3", 00:24:57.916 "uuid": "e1ecbf56-24f9-58b2-b77b-e9b4b692dc58", 00:24:57.916 "is_configured": true, 00:24:57.916 "data_offset": 2048, 00:24:57.916 "data_size": 63488 00:24:57.916 }, 00:24:57.916 { 00:24:57.916 "name": "BaseBdev4", 00:24:57.916 "uuid": "82aae25b-5930-5937-9ec7-04fea8641eef", 00:24:57.916 "is_configured": true, 00:24:57.916 "data_offset": 2048, 00:24:57.916 "data_size": 63488 00:24:57.916 } 00:24:57.916 ] 00:24:57.916 }' 00:24:57.916 22:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:57.916 22:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:57.916 22:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:57.916 22:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:57.916 22:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:58.176 [2024-07-13 22:08:17.343352] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:58.176 [2024-07-13 22:08:17.429742] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:58.176 [2024-07-13 22:08:17.429821] 
bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:58.176 [2024-07-13 22:08:17.429844] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:58.176 [2024-07-13 22:08:17.429854] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:58.176 22:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:58.176 22:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:58.176 22:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:58.176 22:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:58.176 22:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:58.176 22:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:58.176 22:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:58.176 22:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:58.176 22:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:58.176 22:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:58.176 22:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.176 22:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.435 22:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:58.435 "name": "raid_bdev1", 00:24:58.435 "uuid": "ce454044-c8f4-4749-bb03-b57bbc5d5d3c", 
00:24:58.435 "strip_size_kb": 0, 00:24:58.435 "state": "online", 00:24:58.435 "raid_level": "raid1", 00:24:58.435 "superblock": true, 00:24:58.435 "num_base_bdevs": 4, 00:24:58.435 "num_base_bdevs_discovered": 2, 00:24:58.435 "num_base_bdevs_operational": 2, 00:24:58.435 "base_bdevs_list": [ 00:24:58.435 { 00:24:58.435 "name": null, 00:24:58.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.435 "is_configured": false, 00:24:58.435 "data_offset": 2048, 00:24:58.435 "data_size": 63488 00:24:58.435 }, 00:24:58.435 { 00:24:58.435 "name": null, 00:24:58.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.435 "is_configured": false, 00:24:58.435 "data_offset": 2048, 00:24:58.435 "data_size": 63488 00:24:58.435 }, 00:24:58.435 { 00:24:58.435 "name": "BaseBdev3", 00:24:58.435 "uuid": "e1ecbf56-24f9-58b2-b77b-e9b4b692dc58", 00:24:58.435 "is_configured": true, 00:24:58.435 "data_offset": 2048, 00:24:58.435 "data_size": 63488 00:24:58.435 }, 00:24:58.435 { 00:24:58.435 "name": "BaseBdev4", 00:24:58.435 "uuid": "82aae25b-5930-5937-9ec7-04fea8641eef", 00:24:58.435 "is_configured": true, 00:24:58.435 "data_offset": 2048, 00:24:58.435 "data_size": 63488 00:24:58.435 } 00:24:58.436 ] 00:24:58.436 }' 00:24:58.436 22:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:58.436 22:08:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:59.004 22:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:59.004 [2024-07-13 22:08:18.296462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:59.004 [2024-07-13 22:08:18.296543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:59.004 [2024-07-13 22:08:18.296569] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000045980 00:24:59.004 [2024-07-13 22:08:18.296581] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:59.004 [2024-07-13 22:08:18.297106] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:59.004 [2024-07-13 22:08:18.297128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:59.004 [2024-07-13 22:08:18.297224] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:59.004 [2024-07-13 22:08:18.297238] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:24:59.004 [2024-07-13 22:08:18.297256] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:59.004 [2024-07-13 22:08:18.297279] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:59.004 [2024-07-13 22:08:18.313333] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000041b30 00:24:59.004 spare 00:24:59.004 [2024-07-13 22:08:18.315108] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:59.004 22:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:25:00.380 22:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:00.380 22:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:00.380 22:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:00.380 22:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:00.380 22:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:00.381 22:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.381 22:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.381 22:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:00.381 "name": "raid_bdev1", 00:25:00.381 "uuid": "ce454044-c8f4-4749-bb03-b57bbc5d5d3c", 00:25:00.381 "strip_size_kb": 0, 00:25:00.381 "state": "online", 00:25:00.381 "raid_level": "raid1", 00:25:00.381 "superblock": true, 00:25:00.381 "num_base_bdevs": 4, 00:25:00.381 "num_base_bdevs_discovered": 3, 00:25:00.381 "num_base_bdevs_operational": 3, 00:25:00.381 "process": { 00:25:00.381 "type": "rebuild", 00:25:00.381 "target": "spare", 00:25:00.381 "progress": { 00:25:00.381 "blocks": 22528, 00:25:00.381 "percent": 35 00:25:00.381 } 00:25:00.381 }, 00:25:00.381 "base_bdevs_list": [ 00:25:00.381 { 00:25:00.381 "name": "spare", 00:25:00.381 "uuid": "28974fc2-551b-5e24-9224-3ac52abed6ad", 00:25:00.381 "is_configured": true, 00:25:00.381 "data_offset": 2048, 00:25:00.381 "data_size": 63488 00:25:00.381 }, 00:25:00.381 { 00:25:00.381 "name": null, 00:25:00.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:00.381 "is_configured": false, 00:25:00.381 "data_offset": 2048, 00:25:00.381 "data_size": 63488 00:25:00.381 }, 00:25:00.381 { 00:25:00.381 "name": "BaseBdev3", 00:25:00.381 "uuid": "e1ecbf56-24f9-58b2-b77b-e9b4b692dc58", 00:25:00.381 "is_configured": true, 00:25:00.381 "data_offset": 2048, 00:25:00.381 "data_size": 63488 00:25:00.381 }, 00:25:00.381 { 00:25:00.381 "name": "BaseBdev4", 00:25:00.381 "uuid": "82aae25b-5930-5937-9ec7-04fea8641eef", 00:25:00.381 "is_configured": true, 00:25:00.381 "data_offset": 2048, 00:25:00.381 "data_size": 63488 00:25:00.381 } 00:25:00.381 ] 00:25:00.381 }' 00:25:00.381 22:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
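The `verify_raid_bdev_process` checks above extract `.process.type` and `.process.target` with jq, using the `//` operator to default to "none" when no rebuild is running. For environments without jq, a crude grep-based stand-in for that one filter might look like the following — it assumes the flat `"key": "value"` layout seen in the dumps above and is a text-level approximation, not the test's actual mechanism:

```shell
# Rough stand-in for: jq -r '.process.type // "none"'
# Assumes the pretty-printed `"key": "value"` layout shown in the log dumps.
process_field() {
    local json=$1 field=$2 val
    val=$(printf '%s' "$json" | grep -o "\"$field\": \"[^\"]*\"" \
          | head -n 1 | cut -d '"' -f 4)
    echo "${val:-none}"
}

json='{ "process": { "type": "rebuild", "target": "spare" } }'
process_field "$json" type     # prints "rebuild"
process_field "$json" target   # prints "spare"
process_field '{}' type        # prints "none"
```

The jq form is the robust one; this sketch only holds while the RPC output stays pretty-printed one key per value.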
00:25:00.381 22:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:00.381 22:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:00.381 22:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:00.381 22:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:00.381 [2024-07-13 22:08:19.744694] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:00.639 [2024-07-13 22:08:19.826636] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:00.639 [2024-07-13 22:08:19.826699] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:00.639 [2024-07-13 22:08:19.826718] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:00.639 [2024-07-13 22:08:19.826729] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:00.639 22:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:00.639 22:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:00.639 22:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:00.639 22:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:00.639 22:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:00.639 22:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:00.639 22:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:25:00.639 22:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:00.639 22:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:00.639 22:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:00.639 22:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.639 22:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.898 22:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:00.898 "name": "raid_bdev1", 00:25:00.898 "uuid": "ce454044-c8f4-4749-bb03-b57bbc5d5d3c", 00:25:00.898 "strip_size_kb": 0, 00:25:00.898 "state": "online", 00:25:00.898 "raid_level": "raid1", 00:25:00.898 "superblock": true, 00:25:00.898 "num_base_bdevs": 4, 00:25:00.898 "num_base_bdevs_discovered": 2, 00:25:00.898 "num_base_bdevs_operational": 2, 00:25:00.898 "base_bdevs_list": [ 00:25:00.898 { 00:25:00.898 "name": null, 00:25:00.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:00.898 "is_configured": false, 00:25:00.898 "data_offset": 2048, 00:25:00.898 "data_size": 63488 00:25:00.898 }, 00:25:00.898 { 00:25:00.898 "name": null, 00:25:00.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:00.898 "is_configured": false, 00:25:00.898 "data_offset": 2048, 00:25:00.898 "data_size": 63488 00:25:00.898 }, 00:25:00.898 { 00:25:00.898 "name": "BaseBdev3", 00:25:00.898 "uuid": "e1ecbf56-24f9-58b2-b77b-e9b4b692dc58", 00:25:00.898 "is_configured": true, 00:25:00.898 "data_offset": 2048, 00:25:00.898 "data_size": 63488 00:25:00.898 }, 00:25:00.898 { 00:25:00.898 "name": "BaseBdev4", 00:25:00.898 "uuid": "82aae25b-5930-5937-9ec7-04fea8641eef", 00:25:00.898 "is_configured": true, 00:25:00.898 "data_offset": 2048, 
00:25:00.898 "data_size": 63488 00:25:00.898 } 00:25:00.898 ] 00:25:00.898 }' 00:25:00.898 22:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:00.898 22:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:01.156 22:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:01.156 22:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:01.156 22:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:01.156 22:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:01.156 22:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:01.156 22:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.156 22:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.415 22:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:01.415 "name": "raid_bdev1", 00:25:01.415 "uuid": "ce454044-c8f4-4749-bb03-b57bbc5d5d3c", 00:25:01.415 "strip_size_kb": 0, 00:25:01.415 "state": "online", 00:25:01.415 "raid_level": "raid1", 00:25:01.415 "superblock": true, 00:25:01.415 "num_base_bdevs": 4, 00:25:01.415 "num_base_bdevs_discovered": 2, 00:25:01.415 "num_base_bdevs_operational": 2, 00:25:01.415 "base_bdevs_list": [ 00:25:01.415 { 00:25:01.415 "name": null, 00:25:01.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.415 "is_configured": false, 00:25:01.415 "data_offset": 2048, 00:25:01.415 "data_size": 63488 00:25:01.415 }, 00:25:01.415 { 00:25:01.415 "name": null, 00:25:01.415 "uuid": "00000000-0000-0000-0000-000000000000", 
00:25:01.415 "is_configured": false, 00:25:01.415 "data_offset": 2048, 00:25:01.415 "data_size": 63488 00:25:01.415 }, 00:25:01.415 { 00:25:01.415 "name": "BaseBdev3", 00:25:01.415 "uuid": "e1ecbf56-24f9-58b2-b77b-e9b4b692dc58", 00:25:01.415 "is_configured": true, 00:25:01.415 "data_offset": 2048, 00:25:01.415 "data_size": 63488 00:25:01.415 }, 00:25:01.415 { 00:25:01.415 "name": "BaseBdev4", 00:25:01.415 "uuid": "82aae25b-5930-5937-9ec7-04fea8641eef", 00:25:01.415 "is_configured": true, 00:25:01.415 "data_offset": 2048, 00:25:01.415 "data_size": 63488 00:25:01.415 } 00:25:01.415 ] 00:25:01.415 }' 00:25:01.416 22:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:01.416 22:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:01.416 22:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:01.416 22:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:01.416 22:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:25:01.674 22:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:01.933 [2024-07-13 22:08:21.074788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:01.933 [2024-07-13 22:08:21.074855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:01.933 [2024-07-13 22:08:21.074879] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000045f80 00:25:01.933 [2024-07-13 22:08:21.074893] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
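After BaseBdev1's stale superblock is rejected ("raid superblock does not contain this bdev's uuid"), the state checks keep asserting `num_base_bdevs_discovered` == 2; in the dumps, the discovered bdevs are exactly the `base_bdevs_list` entries with `"is_configured": true`. A quick sanity recount over the JSON text — a grep approximation for illustration, not how `verify_raid_bdev_state` itself counts:

```shell
# Counts base bdevs reported as configured in a bdev_raid_get_bdevs dump.
# Text-level approximation; the real checks read structured fields via jq.
count_configured() {
    printf '%s' "$1" | grep -o '"is_configured": true' | wc -l
}

info='[{"is_configured": false}, {"is_configured": false},
       {"is_configured": true}, {"is_configured": true}]'
count_configured "$info"   # 2 configured, matching num_base_bdevs_discovered
```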
00:25:01.933 [2024-07-13 22:08:21.075364] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:01.933 [2024-07-13 22:08:21.075387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:01.933 [2024-07-13 22:08:21.075469] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:01.933 [2024-07-13 22:08:21.075487] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:25:01.933 [2024-07-13 22:08:21.075497] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:01.933 BaseBdev1 00:25:01.933 22:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:25:02.868 22:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:02.868 22:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:02.868 22:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:02.868 22:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:02.868 22:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:02.868 22:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:02.868 22:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:02.868 22:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:02.868 22:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:02.868 22:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:02.868 22:08:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:02.868 22:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.126 22:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:03.126 "name": "raid_bdev1", 00:25:03.126 "uuid": "ce454044-c8f4-4749-bb03-b57bbc5d5d3c", 00:25:03.126 "strip_size_kb": 0, 00:25:03.126 "state": "online", 00:25:03.126 "raid_level": "raid1", 00:25:03.126 "superblock": true, 00:25:03.126 "num_base_bdevs": 4, 00:25:03.126 "num_base_bdevs_discovered": 2, 00:25:03.126 "num_base_bdevs_operational": 2, 00:25:03.126 "base_bdevs_list": [ 00:25:03.126 { 00:25:03.126 "name": null, 00:25:03.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.127 "is_configured": false, 00:25:03.127 "data_offset": 2048, 00:25:03.127 "data_size": 63488 00:25:03.127 }, 00:25:03.127 { 00:25:03.127 "name": null, 00:25:03.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.127 "is_configured": false, 00:25:03.127 "data_offset": 2048, 00:25:03.127 "data_size": 63488 00:25:03.127 }, 00:25:03.127 { 00:25:03.127 "name": "BaseBdev3", 00:25:03.127 "uuid": "e1ecbf56-24f9-58b2-b77b-e9b4b692dc58", 00:25:03.127 "is_configured": true, 00:25:03.127 "data_offset": 2048, 00:25:03.127 "data_size": 63488 00:25:03.127 }, 00:25:03.127 { 00:25:03.127 "name": "BaseBdev4", 00:25:03.127 "uuid": "82aae25b-5930-5937-9ec7-04fea8641eef", 00:25:03.127 "is_configured": true, 00:25:03.127 "data_offset": 2048, 00:25:03.127 "data_size": 63488 00:25:03.127 } 00:25:03.127 ] 00:25:03.127 }' 00:25:03.127 22:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:03.127 22:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:03.385 22:08:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:03.385 22:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:03.385 22:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:03.385 22:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:03.385 22:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:03.385 22:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.386 22:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.654 22:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:03.654 "name": "raid_bdev1", 00:25:03.654 "uuid": "ce454044-c8f4-4749-bb03-b57bbc5d5d3c", 00:25:03.654 "strip_size_kb": 0, 00:25:03.654 "state": "online", 00:25:03.654 "raid_level": "raid1", 00:25:03.654 "superblock": true, 00:25:03.654 "num_base_bdevs": 4, 00:25:03.654 "num_base_bdevs_discovered": 2, 00:25:03.654 "num_base_bdevs_operational": 2, 00:25:03.654 "base_bdevs_list": [ 00:25:03.654 { 00:25:03.654 "name": null, 00:25:03.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.654 "is_configured": false, 00:25:03.654 "data_offset": 2048, 00:25:03.654 "data_size": 63488 00:25:03.654 }, 00:25:03.654 { 00:25:03.654 "name": null, 00:25:03.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.654 "is_configured": false, 00:25:03.654 "data_offset": 2048, 00:25:03.654 "data_size": 63488 00:25:03.654 }, 00:25:03.654 { 00:25:03.654 "name": "BaseBdev3", 00:25:03.654 "uuid": "e1ecbf56-24f9-58b2-b77b-e9b4b692dc58", 00:25:03.654 "is_configured": true, 00:25:03.654 "data_offset": 2048, 00:25:03.654 "data_size": 63488 
00:25:03.654 }, 00:25:03.654 { 00:25:03.654 "name": "BaseBdev4", 00:25:03.654 "uuid": "82aae25b-5930-5937-9ec7-04fea8641eef", 00:25:03.654 "is_configured": true, 00:25:03.654 "data_offset": 2048, 00:25:03.654 "data_size": 63488 00:25:03.654 } 00:25:03.654 ] 00:25:03.654 }' 00:25:03.654 22:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:03.654 22:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:03.654 22:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:03.654 22:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:03.654 22:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:03.654 22:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:25:03.654 22:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:03.654 22:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:03.654 22:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:03.654 22:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:03.654 22:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:03.654 22:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:03.654 22:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:03.654 22:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:03.654 22:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:25:03.654 22:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:03.913 [2024-07-13 22:08:23.164555] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:03.913 [2024-07-13 22:08:23.164711] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:25:03.913 [2024-07-13 22:08:23.164730] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:03.913 request: 00:25:03.913 { 00:25:03.913 "base_bdev": "BaseBdev1", 00:25:03.913 "raid_bdev": "raid_bdev1", 00:25:03.913 "method": "bdev_raid_add_base_bdev", 00:25:03.913 "req_id": 1 00:25:03.913 } 00:25:03.913 Got JSON-RPC error response 00:25:03.913 response: 00:25:03.913 { 00:25:03.913 "code": -22, 00:25:03.913 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:03.913 } 00:25:03.913 22:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:25:03.913 22:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:03.913 22:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:03.913 22:08:23 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:03.913 22:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:25:04.849 22:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:04.849 22:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:04.849 22:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:04.849 22:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:04.849 22:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:04.849 22:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:04.849 22:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:04.849 22:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:04.849 22:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:04.849 22:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:04.849 22:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.849 22:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:05.108 22:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:05.108 "name": "raid_bdev1", 00:25:05.108 "uuid": "ce454044-c8f4-4749-bb03-b57bbc5d5d3c", 00:25:05.108 "strip_size_kb": 0, 00:25:05.108 "state": "online", 00:25:05.108 "raid_level": "raid1", 00:25:05.108 "superblock": true, 00:25:05.108 "num_base_bdevs": 4, 00:25:05.108 
"num_base_bdevs_discovered": 2, 00:25:05.108 "num_base_bdevs_operational": 2, 00:25:05.108 "base_bdevs_list": [ 00:25:05.108 { 00:25:05.108 "name": null, 00:25:05.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.108 "is_configured": false, 00:25:05.108 "data_offset": 2048, 00:25:05.108 "data_size": 63488 00:25:05.108 }, 00:25:05.108 { 00:25:05.108 "name": null, 00:25:05.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.108 "is_configured": false, 00:25:05.108 "data_offset": 2048, 00:25:05.108 "data_size": 63488 00:25:05.108 }, 00:25:05.108 { 00:25:05.108 "name": "BaseBdev3", 00:25:05.108 "uuid": "e1ecbf56-24f9-58b2-b77b-e9b4b692dc58", 00:25:05.108 "is_configured": true, 00:25:05.108 "data_offset": 2048, 00:25:05.108 "data_size": 63488 00:25:05.108 }, 00:25:05.108 { 00:25:05.108 "name": "BaseBdev4", 00:25:05.108 "uuid": "82aae25b-5930-5937-9ec7-04fea8641eef", 00:25:05.108 "is_configured": true, 00:25:05.108 "data_offset": 2048, 00:25:05.108 "data_size": 63488 00:25:05.108 } 00:25:05.108 ] 00:25:05.108 }' 00:25:05.108 22:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:05.108 22:08:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:05.674 22:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:05.674 22:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:05.675 22:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:05.675 22:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:05.675 22:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:05.675 22:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:05.675 22:08:24 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:05.675 22:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:05.675 "name": "raid_bdev1", 00:25:05.675 "uuid": "ce454044-c8f4-4749-bb03-b57bbc5d5d3c", 00:25:05.675 "strip_size_kb": 0, 00:25:05.675 "state": "online", 00:25:05.675 "raid_level": "raid1", 00:25:05.675 "superblock": true, 00:25:05.675 "num_base_bdevs": 4, 00:25:05.675 "num_base_bdevs_discovered": 2, 00:25:05.675 "num_base_bdevs_operational": 2, 00:25:05.675 "base_bdevs_list": [ 00:25:05.675 { 00:25:05.675 "name": null, 00:25:05.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.675 "is_configured": false, 00:25:05.675 "data_offset": 2048, 00:25:05.675 "data_size": 63488 00:25:05.675 }, 00:25:05.675 { 00:25:05.675 "name": null, 00:25:05.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.675 "is_configured": false, 00:25:05.675 "data_offset": 2048, 00:25:05.675 "data_size": 63488 00:25:05.675 }, 00:25:05.675 { 00:25:05.675 "name": "BaseBdev3", 00:25:05.675 "uuid": "e1ecbf56-24f9-58b2-b77b-e9b4b692dc58", 00:25:05.675 "is_configured": true, 00:25:05.675 "data_offset": 2048, 00:25:05.675 "data_size": 63488 00:25:05.675 }, 00:25:05.675 { 00:25:05.675 "name": "BaseBdev4", 00:25:05.675 "uuid": "82aae25b-5930-5937-9ec7-04fea8641eef", 00:25:05.675 "is_configured": true, 00:25:05.675 "data_offset": 2048, 00:25:05.675 "data_size": 63488 00:25:05.675 } 00:25:05.675 ] 00:25:05.675 }' 00:25:05.675 22:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:05.933 22:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:05.933 22:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:05.933 22:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # 
[[ none == \n\o\n\e ]] 00:25:05.933 22:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 1491306 00:25:05.933 22:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@948 -- # '[' -z 1491306 ']' 00:25:05.933 22:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # kill -0 1491306 00:25:05.933 22:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # uname 00:25:05.933 22:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:05.933 22:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1491306 00:25:05.933 22:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:05.933 22:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:05.933 22:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1491306' 00:25:05.933 killing process with pid 1491306 00:25:05.933 22:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # kill 1491306 00:25:05.933 Received shutdown signal, test time was about 24.292558 seconds 00:25:05.933 00:25:05.933 Latency(us) 00:25:05.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.933 =================================================================================================================== 00:25:05.933 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:05.933 [2024-07-13 22:08:25.161470] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:05.933 [2024-07-13 22:08:25.161610] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:05.933 22:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # wait 1491306 00:25:05.933 [2024-07-13 22:08:25.161682] bdev_raid.c: 
451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:05.933 [2024-07-13 22:08:25.161698] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000045380 name raid_bdev1, state offline 00:25:06.192 [2024-07-13 22:08:25.504636] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:07.569 22:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:25:07.569 00:25:07.569 real 0m30.138s 00:25:07.569 user 0m44.132s 00:25:07.569 sys 0m4.462s 00:25:07.569 22:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:07.569 22:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:07.569 ************************************ 00:25:07.569 END TEST raid_rebuild_test_sb_io 00:25:07.569 ************************************ 00:25:07.569 22:08:26 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:07.569 22:08:26 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' n == y ']' 00:25:07.569 22:08:26 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:25:07.569 22:08:26 bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:25:07.569 22:08:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:07.569 22:08:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:07.569 22:08:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:07.569 ************************************ 00:25:07.569 START TEST raid_state_function_test_sb_4k 00:25:07.569 ************************************ 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:25:07.569 22:08:26 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=1496810 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1496810' 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:07.569 Process raid pid: 1496810 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 1496810 /var/tmp/spdk-raid.sock 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 1496810 ']' 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:07.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:07.569 22:08:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:07.830 [2024-07-13 22:08:26.982140] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:07.830 [2024-07-13 22:08:26.982230] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.830 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.830 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.830 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.830 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.830 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.830 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.830 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.830 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.830 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.830 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.830 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.830 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.830 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.830 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.830 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.830 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.830 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.830 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.830 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.830 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.830 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.830 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.830 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.830 EAL: Requested device 0000:3f:01.7 
cannot be used 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.830 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.830 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.830 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.830 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:07.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.831 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:07.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.831 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:07.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.831 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:07.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:07.831 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:07.831 [2024-07-13 22:08:27.145803] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.090 [2024-07-13 22:08:27.351250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.348 [2024-07-13 22:08:27.605707] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:08.348 [2024-07-13 22:08:27.605734] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:08.607 22:08:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:08.607 22:08:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:25:08.607 22:08:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:08.607 [2024-07-13 22:08:27.920505] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:08.607 [2024-07-13 22:08:27.920552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:08.607 [2024-07-13 22:08:27.920563] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:08.607 [2024-07-13 22:08:27.920575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:08.607 22:08:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:08.607 22:08:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:08.607 22:08:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:08.607 22:08:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:08.607 22:08:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:08.607 22:08:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:08.607 22:08:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:08.607 22:08:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:08.607 22:08:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:08.607 22:08:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:08.607 22:08:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:25:08.607 22:08:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:08.866 22:08:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:08.866 "name": "Existed_Raid", 00:25:08.866 "uuid": "56b45073-9b47-4f80-bd73-ec8a387e09a5", 00:25:08.866 "strip_size_kb": 0, 00:25:08.866 "state": "configuring", 00:25:08.866 "raid_level": "raid1", 00:25:08.866 "superblock": true, 00:25:08.866 "num_base_bdevs": 2, 00:25:08.866 "num_base_bdevs_discovered": 0, 00:25:08.866 "num_base_bdevs_operational": 2, 00:25:08.866 "base_bdevs_list": [ 00:25:08.866 { 00:25:08.866 "name": "BaseBdev1", 00:25:08.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.866 "is_configured": false, 00:25:08.866 "data_offset": 0, 00:25:08.866 "data_size": 0 00:25:08.866 }, 00:25:08.866 { 00:25:08.866 "name": "BaseBdev2", 00:25:08.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.866 "is_configured": false, 00:25:08.866 "data_offset": 0, 00:25:08.866 "data_size": 0 00:25:08.866 } 00:25:08.866 ] 00:25:08.866 }' 00:25:08.866 22:08:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:08.866 22:08:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:09.465 22:08:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:09.465 [2024-07-13 22:08:28.770621] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:09.465 [2024-07-13 22:08:28.770653] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:25:09.465 22:08:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:09.724 [2024-07-13 22:08:28.939101] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:09.724 [2024-07-13 22:08:28.939140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:09.724 [2024-07-13 22:08:28.939150] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:09.724 [2024-07-13 22:08:28.939162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:09.724 22:08:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:25:09.984 [2024-07-13 22:08:29.132648] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:09.984 BaseBdev1 00:25:09.984 22:08:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:25:09.984 22:08:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:25:09.984 22:08:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:09.984 22:08:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:25:09.984 22:08:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:09.984 22:08:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:09.984 22:08:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:09.984 22:08:29 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:10.244 [ 00:25:10.244 { 00:25:10.244 "name": "BaseBdev1", 00:25:10.244 "aliases": [ 00:25:10.244 "3d72aff3-9e0a-4617-8f86-a60037048a1a" 00:25:10.244 ], 00:25:10.244 "product_name": "Malloc disk", 00:25:10.244 "block_size": 4096, 00:25:10.244 "num_blocks": 8192, 00:25:10.244 "uuid": "3d72aff3-9e0a-4617-8f86-a60037048a1a", 00:25:10.244 "assigned_rate_limits": { 00:25:10.244 "rw_ios_per_sec": 0, 00:25:10.244 "rw_mbytes_per_sec": 0, 00:25:10.244 "r_mbytes_per_sec": 0, 00:25:10.244 "w_mbytes_per_sec": 0 00:25:10.244 }, 00:25:10.244 "claimed": true, 00:25:10.244 "claim_type": "exclusive_write", 00:25:10.244 "zoned": false, 00:25:10.244 "supported_io_types": { 00:25:10.244 "read": true, 00:25:10.244 "write": true, 00:25:10.244 "unmap": true, 00:25:10.244 "flush": true, 00:25:10.244 "reset": true, 00:25:10.244 "nvme_admin": false, 00:25:10.244 "nvme_io": false, 00:25:10.244 "nvme_io_md": false, 00:25:10.244 "write_zeroes": true, 00:25:10.244 "zcopy": true, 00:25:10.244 "get_zone_info": false, 00:25:10.244 "zone_management": false, 00:25:10.244 "zone_append": false, 00:25:10.244 "compare": false, 00:25:10.244 "compare_and_write": false, 00:25:10.244 "abort": true, 00:25:10.244 "seek_hole": false, 00:25:10.244 "seek_data": false, 00:25:10.244 "copy": true, 00:25:10.244 "nvme_iov_md": false 00:25:10.244 }, 00:25:10.244 "memory_domains": [ 00:25:10.244 { 00:25:10.244 "dma_device_id": "system", 00:25:10.244 "dma_device_type": 1 00:25:10.244 }, 00:25:10.244 { 00:25:10.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:10.244 "dma_device_type": 2 00:25:10.244 } 00:25:10.244 ], 00:25:10.244 "driver_specific": {} 00:25:10.244 } 00:25:10.244 ] 00:25:10.244 22:08:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:25:10.244 22:08:29 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:10.244 22:08:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:10.244 22:08:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:10.244 22:08:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:10.244 22:08:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:10.244 22:08:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:10.244 22:08:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:10.244 22:08:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:10.244 22:08:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:10.244 22:08:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:10.244 22:08:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.244 22:08:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:10.503 22:08:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:10.503 "name": "Existed_Raid", 00:25:10.503 "uuid": "b9e33747-6e5b-419c-aae4-ae6c81c0c694", 00:25:10.503 "strip_size_kb": 0, 00:25:10.503 "state": "configuring", 00:25:10.503 "raid_level": "raid1", 00:25:10.503 "superblock": true, 00:25:10.503 "num_base_bdevs": 2, 00:25:10.503 "num_base_bdevs_discovered": 1, 00:25:10.503 
"num_base_bdevs_operational": 2, 00:25:10.503 "base_bdevs_list": [ 00:25:10.503 { 00:25:10.503 "name": "BaseBdev1", 00:25:10.503 "uuid": "3d72aff3-9e0a-4617-8f86-a60037048a1a", 00:25:10.503 "is_configured": true, 00:25:10.503 "data_offset": 256, 00:25:10.503 "data_size": 7936 00:25:10.503 }, 00:25:10.503 { 00:25:10.503 "name": "BaseBdev2", 00:25:10.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.503 "is_configured": false, 00:25:10.503 "data_offset": 0, 00:25:10.503 "data_size": 0 00:25:10.503 } 00:25:10.503 ] 00:25:10.503 }' 00:25:10.503 22:08:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:10.503 22:08:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:10.762 22:08:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:11.022 [2024-07-13 22:08:30.295716] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:11.022 [2024-07-13 22:08:30.295763] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:25:11.022 22:08:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:25:11.281 [2024-07-13 22:08:30.472243] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:11.281 [2024-07-13 22:08:30.474052] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:11.281 [2024-07-13 22:08:30.474089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:11.281 22:08:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 
00:25:11.281 22:08:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:11.281 22:08:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:11.281 22:08:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:11.281 22:08:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:11.281 22:08:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:11.281 22:08:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:11.281 22:08:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:11.281 22:08:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:11.281 22:08:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:11.281 22:08:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:11.281 22:08:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:11.281 22:08:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:11.281 22:08:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:11.281 22:08:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:11.281 "name": "Existed_Raid", 00:25:11.281 "uuid": "2df1778e-af60-472f-95dc-3462d5823e44", 00:25:11.281 "strip_size_kb": 0, 00:25:11.281 "state": "configuring", 00:25:11.281 "raid_level": "raid1", 
00:25:11.281 "superblock": true, 00:25:11.281 "num_base_bdevs": 2, 00:25:11.281 "num_base_bdevs_discovered": 1, 00:25:11.281 "num_base_bdevs_operational": 2, 00:25:11.281 "base_bdevs_list": [ 00:25:11.281 { 00:25:11.281 "name": "BaseBdev1", 00:25:11.281 "uuid": "3d72aff3-9e0a-4617-8f86-a60037048a1a", 00:25:11.281 "is_configured": true, 00:25:11.281 "data_offset": 256, 00:25:11.281 "data_size": 7936 00:25:11.281 }, 00:25:11.281 { 00:25:11.281 "name": "BaseBdev2", 00:25:11.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.281 "is_configured": false, 00:25:11.281 "data_offset": 0, 00:25:11.281 "data_size": 0 00:25:11.281 } 00:25:11.281 ] 00:25:11.281 }' 00:25:11.281 22:08:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:11.281 22:08:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:11.847 22:08:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:25:12.105 [2024-07-13 22:08:31.369318] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:12.105 [2024-07-13 22:08:31.369537] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:25:12.105 [2024-07-13 22:08:31.369557] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:12.105 [2024-07-13 22:08:31.369794] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:25:12.105 [2024-07-13 22:08:31.369999] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:25:12.105 [2024-07-13 22:08:31.370013] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:25:12.105 [2024-07-13 22:08:31.370152] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:25:12.105 BaseBdev2 00:25:12.105 22:08:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:25:12.105 22:08:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:25:12.105 22:08:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:12.105 22:08:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:25:12.105 22:08:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:12.105 22:08:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:12.105 22:08:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:12.363 22:08:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:12.363 [ 00:25:12.363 { 00:25:12.363 "name": "BaseBdev2", 00:25:12.363 "aliases": [ 00:25:12.363 "0a4faf33-27d2-4ebd-831b-e5b4d03663bb" 00:25:12.363 ], 00:25:12.363 "product_name": "Malloc disk", 00:25:12.363 "block_size": 4096, 00:25:12.363 "num_blocks": 8192, 00:25:12.363 "uuid": "0a4faf33-27d2-4ebd-831b-e5b4d03663bb", 00:25:12.363 "assigned_rate_limits": { 00:25:12.363 "rw_ios_per_sec": 0, 00:25:12.363 "rw_mbytes_per_sec": 0, 00:25:12.363 "r_mbytes_per_sec": 0, 00:25:12.363 "w_mbytes_per_sec": 0 00:25:12.363 }, 00:25:12.363 "claimed": true, 00:25:12.363 "claim_type": "exclusive_write", 00:25:12.363 "zoned": false, 00:25:12.363 "supported_io_types": { 00:25:12.363 "read": true, 00:25:12.363 "write": true, 00:25:12.363 "unmap": true, 00:25:12.363 "flush": true, 00:25:12.363 "reset": true, 
00:25:12.363 "nvme_admin": false, 00:25:12.363 "nvme_io": false, 00:25:12.363 "nvme_io_md": false, 00:25:12.363 "write_zeroes": true, 00:25:12.363 "zcopy": true, 00:25:12.364 "get_zone_info": false, 00:25:12.364 "zone_management": false, 00:25:12.364 "zone_append": false, 00:25:12.364 "compare": false, 00:25:12.364 "compare_and_write": false, 00:25:12.364 "abort": true, 00:25:12.364 "seek_hole": false, 00:25:12.364 "seek_data": false, 00:25:12.364 "copy": true, 00:25:12.364 "nvme_iov_md": false 00:25:12.364 }, 00:25:12.364 "memory_domains": [ 00:25:12.364 { 00:25:12.364 "dma_device_id": "system", 00:25:12.364 "dma_device_type": 1 00:25:12.364 }, 00:25:12.364 { 00:25:12.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.364 "dma_device_type": 2 00:25:12.364 } 00:25:12.364 ], 00:25:12.364 "driver_specific": {} 00:25:12.364 } 00:25:12.364 ] 00:25:12.364 22:08:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:25:12.364 22:08:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:12.364 22:08:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:12.364 22:08:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:25:12.364 22:08:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:12.364 22:08:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:12.364 22:08:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:12.364 22:08:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:12.364 22:08:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:12.364 22:08:31 bdev_raid.raid_state_function_test_sb_4k 
-- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:12.364 22:08:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:12.364 22:08:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:12.364 22:08:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:12.364 22:08:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:12.364 22:08:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:12.622 22:08:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:12.622 "name": "Existed_Raid", 00:25:12.622 "uuid": "2df1778e-af60-472f-95dc-3462d5823e44", 00:25:12.622 "strip_size_kb": 0, 00:25:12.622 "state": "online", 00:25:12.622 "raid_level": "raid1", 00:25:12.622 "superblock": true, 00:25:12.622 "num_base_bdevs": 2, 00:25:12.622 "num_base_bdevs_discovered": 2, 00:25:12.622 "num_base_bdevs_operational": 2, 00:25:12.622 "base_bdevs_list": [ 00:25:12.622 { 00:25:12.622 "name": "BaseBdev1", 00:25:12.622 "uuid": "3d72aff3-9e0a-4617-8f86-a60037048a1a", 00:25:12.622 "is_configured": true, 00:25:12.622 "data_offset": 256, 00:25:12.622 "data_size": 7936 00:25:12.622 }, 00:25:12.622 { 00:25:12.622 "name": "BaseBdev2", 00:25:12.622 "uuid": "0a4faf33-27d2-4ebd-831b-e5b4d03663bb", 00:25:12.622 "is_configured": true, 00:25:12.622 "data_offset": 256, 00:25:12.622 "data_size": 7936 00:25:12.622 } 00:25:12.622 ] 00:25:12.622 }' 00:25:12.622 22:08:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:12.622 22:08:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:13.187 22:08:32 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:25:13.187 22:08:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:13.187 22:08:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:13.187 22:08:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:13.187 22:08:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:13.187 22:08:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:25:13.187 22:08:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:13.187 22:08:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:13.187 [2024-07-13 22:08:32.560733] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:13.444 22:08:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:13.444 "name": "Existed_Raid", 00:25:13.444 "aliases": [ 00:25:13.444 "2df1778e-af60-472f-95dc-3462d5823e44" 00:25:13.444 ], 00:25:13.444 "product_name": "Raid Volume", 00:25:13.444 "block_size": 4096, 00:25:13.444 "num_blocks": 7936, 00:25:13.444 "uuid": "2df1778e-af60-472f-95dc-3462d5823e44", 00:25:13.444 "assigned_rate_limits": { 00:25:13.444 "rw_ios_per_sec": 0, 00:25:13.444 "rw_mbytes_per_sec": 0, 00:25:13.444 "r_mbytes_per_sec": 0, 00:25:13.444 "w_mbytes_per_sec": 0 00:25:13.444 }, 00:25:13.444 "claimed": false, 00:25:13.444 "zoned": false, 00:25:13.444 "supported_io_types": { 00:25:13.444 "read": true, 00:25:13.444 "write": true, 00:25:13.444 "unmap": false, 00:25:13.444 "flush": false, 00:25:13.444 "reset": true, 00:25:13.444 "nvme_admin": 
false, 00:25:13.444 "nvme_io": false, 00:25:13.444 "nvme_io_md": false, 00:25:13.444 "write_zeroes": true, 00:25:13.444 "zcopy": false, 00:25:13.444 "get_zone_info": false, 00:25:13.444 "zone_management": false, 00:25:13.444 "zone_append": false, 00:25:13.444 "compare": false, 00:25:13.444 "compare_and_write": false, 00:25:13.444 "abort": false, 00:25:13.444 "seek_hole": false, 00:25:13.445 "seek_data": false, 00:25:13.445 "copy": false, 00:25:13.445 "nvme_iov_md": false 00:25:13.445 }, 00:25:13.445 "memory_domains": [ 00:25:13.445 { 00:25:13.445 "dma_device_id": "system", 00:25:13.445 "dma_device_type": 1 00:25:13.445 }, 00:25:13.445 { 00:25:13.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.445 "dma_device_type": 2 00:25:13.445 }, 00:25:13.445 { 00:25:13.445 "dma_device_id": "system", 00:25:13.445 "dma_device_type": 1 00:25:13.445 }, 00:25:13.445 { 00:25:13.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.445 "dma_device_type": 2 00:25:13.445 } 00:25:13.445 ], 00:25:13.445 "driver_specific": { 00:25:13.445 "raid": { 00:25:13.445 "uuid": "2df1778e-af60-472f-95dc-3462d5823e44", 00:25:13.445 "strip_size_kb": 0, 00:25:13.445 "state": "online", 00:25:13.445 "raid_level": "raid1", 00:25:13.445 "superblock": true, 00:25:13.445 "num_base_bdevs": 2, 00:25:13.445 "num_base_bdevs_discovered": 2, 00:25:13.445 "num_base_bdevs_operational": 2, 00:25:13.445 "base_bdevs_list": [ 00:25:13.445 { 00:25:13.445 "name": "BaseBdev1", 00:25:13.445 "uuid": "3d72aff3-9e0a-4617-8f86-a60037048a1a", 00:25:13.445 "is_configured": true, 00:25:13.445 "data_offset": 256, 00:25:13.445 "data_size": 7936 00:25:13.445 }, 00:25:13.445 { 00:25:13.445 "name": "BaseBdev2", 00:25:13.445 "uuid": "0a4faf33-27d2-4ebd-831b-e5b4d03663bb", 00:25:13.445 "is_configured": true, 00:25:13.445 "data_offset": 256, 00:25:13.445 "data_size": 7936 00:25:13.445 } 00:25:13.445 ] 00:25:13.445 } 00:25:13.445 } 00:25:13.445 }' 00:25:13.445 22:08:32 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:13.445 22:08:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:25:13.445 BaseBdev2' 00:25:13.445 22:08:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:13.445 22:08:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:13.445 22:08:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:13.445 22:08:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:13.445 "name": "BaseBdev1", 00:25:13.445 "aliases": [ 00:25:13.445 "3d72aff3-9e0a-4617-8f86-a60037048a1a" 00:25:13.445 ], 00:25:13.445 "product_name": "Malloc disk", 00:25:13.445 "block_size": 4096, 00:25:13.445 "num_blocks": 8192, 00:25:13.445 "uuid": "3d72aff3-9e0a-4617-8f86-a60037048a1a", 00:25:13.445 "assigned_rate_limits": { 00:25:13.445 "rw_ios_per_sec": 0, 00:25:13.445 "rw_mbytes_per_sec": 0, 00:25:13.445 "r_mbytes_per_sec": 0, 00:25:13.445 "w_mbytes_per_sec": 0 00:25:13.445 }, 00:25:13.445 "claimed": true, 00:25:13.445 "claim_type": "exclusive_write", 00:25:13.445 "zoned": false, 00:25:13.445 "supported_io_types": { 00:25:13.445 "read": true, 00:25:13.445 "write": true, 00:25:13.445 "unmap": true, 00:25:13.445 "flush": true, 00:25:13.445 "reset": true, 00:25:13.445 "nvme_admin": false, 00:25:13.445 "nvme_io": false, 00:25:13.445 "nvme_io_md": false, 00:25:13.445 "write_zeroes": true, 00:25:13.445 "zcopy": true, 00:25:13.445 "get_zone_info": false, 00:25:13.445 "zone_management": false, 00:25:13.445 "zone_append": false, 00:25:13.445 "compare": false, 00:25:13.445 "compare_and_write": false, 00:25:13.445 "abort": true, 00:25:13.445 "seek_hole": false, 00:25:13.445 
"seek_data": false, 00:25:13.445 "copy": true, 00:25:13.445 "nvme_iov_md": false 00:25:13.445 }, 00:25:13.445 "memory_domains": [ 00:25:13.445 { 00:25:13.445 "dma_device_id": "system", 00:25:13.445 "dma_device_type": 1 00:25:13.445 }, 00:25:13.445 { 00:25:13.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.445 "dma_device_type": 2 00:25:13.445 } 00:25:13.445 ], 00:25:13.445 "driver_specific": {} 00:25:13.445 }' 00:25:13.445 22:08:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:13.702 22:08:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:13.702 22:08:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:25:13.702 22:08:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:13.702 22:08:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:13.702 22:08:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:13.702 22:08:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:13.702 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:13.702 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:13.702 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:13.960 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:13.960 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:13.960 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:13.960 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:13.960 22:08:33 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:13.960 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:13.960 "name": "BaseBdev2", 00:25:13.960 "aliases": [ 00:25:13.960 "0a4faf33-27d2-4ebd-831b-e5b4d03663bb" 00:25:13.960 ], 00:25:13.960 "product_name": "Malloc disk", 00:25:13.960 "block_size": 4096, 00:25:13.960 "num_blocks": 8192, 00:25:13.960 "uuid": "0a4faf33-27d2-4ebd-831b-e5b4d03663bb", 00:25:13.960 "assigned_rate_limits": { 00:25:13.960 "rw_ios_per_sec": 0, 00:25:13.960 "rw_mbytes_per_sec": 0, 00:25:13.960 "r_mbytes_per_sec": 0, 00:25:13.960 "w_mbytes_per_sec": 0 00:25:13.960 }, 00:25:13.960 "claimed": true, 00:25:13.960 "claim_type": "exclusive_write", 00:25:13.960 "zoned": false, 00:25:13.960 "supported_io_types": { 00:25:13.960 "read": true, 00:25:13.960 "write": true, 00:25:13.960 "unmap": true, 00:25:13.960 "flush": true, 00:25:13.960 "reset": true, 00:25:13.960 "nvme_admin": false, 00:25:13.960 "nvme_io": false, 00:25:13.960 "nvme_io_md": false, 00:25:13.960 "write_zeroes": true, 00:25:13.960 "zcopy": true, 00:25:13.960 "get_zone_info": false, 00:25:13.960 "zone_management": false, 00:25:13.960 "zone_append": false, 00:25:13.960 "compare": false, 00:25:13.960 "compare_and_write": false, 00:25:13.960 "abort": true, 00:25:13.960 "seek_hole": false, 00:25:13.960 "seek_data": false, 00:25:13.960 "copy": true, 00:25:13.960 "nvme_iov_md": false 00:25:13.960 }, 00:25:13.960 "memory_domains": [ 00:25:13.961 { 00:25:13.961 "dma_device_id": "system", 00:25:13.961 "dma_device_type": 1 00:25:13.961 }, 00:25:13.961 { 00:25:13.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.961 "dma_device_type": 2 00:25:13.961 } 00:25:13.961 ], 00:25:13.961 "driver_specific": {} 00:25:13.961 }' 00:25:13.961 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:13.961 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:14.218 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:25:14.218 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:14.218 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:14.218 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:14.218 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:14.218 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:14.218 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:14.218 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:14.219 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:14.219 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:14.219 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:14.477 [2024-07-13 22:08:33.723741] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:14.477 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:25:14.477 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:25:14.477 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:14.477 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 
00:25:14.477 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:25:14.477 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:25:14.477 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:14.477 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:14.477 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:14.477 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:14.477 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:14.477 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:14.477 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:14.477 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:14.477 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:14.477 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.477 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:14.736 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:14.736 "name": "Existed_Raid", 00:25:14.736 "uuid": "2df1778e-af60-472f-95dc-3462d5823e44", 00:25:14.736 "strip_size_kb": 0, 00:25:14.736 "state": "online", 00:25:14.736 "raid_level": "raid1", 00:25:14.736 
"superblock": true, 00:25:14.736 "num_base_bdevs": 2, 00:25:14.736 "num_base_bdevs_discovered": 1, 00:25:14.736 "num_base_bdevs_operational": 1, 00:25:14.736 "base_bdevs_list": [ 00:25:14.736 { 00:25:14.736 "name": null, 00:25:14.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.736 "is_configured": false, 00:25:14.736 "data_offset": 256, 00:25:14.736 "data_size": 7936 00:25:14.736 }, 00:25:14.736 { 00:25:14.736 "name": "BaseBdev2", 00:25:14.736 "uuid": "0a4faf33-27d2-4ebd-831b-e5b4d03663bb", 00:25:14.736 "is_configured": true, 00:25:14.736 "data_offset": 256, 00:25:14.736 "data_size": 7936 00:25:14.736 } 00:25:14.736 ] 00:25:14.736 }' 00:25:14.736 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:14.736 22:08:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:15.304 22:08:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:25:15.304 22:08:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:15.304 22:08:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:15.304 22:08:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.304 22:08:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:15.304 22:08:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:15.304 22:08:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:15.563 [2024-07-13 22:08:34.758061] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:15.563 
[2024-07-13 22:08:34.758159] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:15.563 [2024-07-13 22:08:34.846539] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:15.563 [2024-07-13 22:08:34.846610] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:15.563 [2024-07-13 22:08:34.846623] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:25:15.563 22:08:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:15.563 22:08:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:15.563 22:08:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.563 22:08:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:25:15.822 22:08:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:25:15.822 22:08:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:25:15.822 22:08:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:25:15.822 22:08:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 1496810 00:25:15.822 22:08:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 1496810 ']' 00:25:15.822 22:08:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 1496810 00:25:15.822 22:08:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:25:15.822 22:08:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux 
']' 00:25:15.822 22:08:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1496810 00:25:15.822 22:08:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:15.822 22:08:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:15.822 22:08:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1496810' 00:25:15.822 killing process with pid 1496810 00:25:15.822 22:08:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@967 -- # kill 1496810 00:25:15.822 [2024-07-13 22:08:35.093360] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:15.822 22:08:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # wait 1496810 00:25:15.822 [2024-07-13 22:08:35.111229] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:17.203 22:08:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:25:17.203 00:25:17.203 real 0m9.435s 00:25:17.203 user 0m15.479s 00:25:17.203 sys 0m1.760s 00:25:17.203 22:08:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:17.203 22:08:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:17.203 ************************************ 00:25:17.203 END TEST raid_state_function_test_sb_4k 00:25:17.203 ************************************ 00:25:17.203 22:08:36 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:17.203 22:08:36 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:25:17.203 22:08:36 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:25:17.203 22:08:36 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:17.203 22:08:36 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:25:17.203 ************************************ 00:25:17.203 START TEST raid_superblock_test_4k 00:25:17.203 ************************************ 00:25:17.203 22:08:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:25:17.203 22:08:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:25:17.203 22:08:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:25:17.203 22:08:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:25:17.203 22:08:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:25:17.203 22:08:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:25:17.203 22:08:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:25:17.203 22:08:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:25:17.203 22:08:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:25:17.203 22:08:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:25:17.203 22:08:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:25:17.203 22:08:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:25:17.203 22:08:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:25:17.203 22:08:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:25:17.203 22:08:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:25:17.203 22:08:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:25:17.203 22:08:36 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@411 -- # raid_pid=1498626 00:25:17.203 22:08:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 1498626 /var/tmp/spdk-raid.sock 00:25:17.203 22:08:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:25:17.203 22:08:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@829 -- # '[' -z 1498626 ']' 00:25:17.203 22:08:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:17.203 22:08:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:17.203 22:08:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:17.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:17.203 22:08:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:17.203 22:08:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:17.203 [2024-07-13 22:08:36.499226] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:25:17.203 [2024-07-13 22:08:36.499332] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1498626 ] 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3d:02.3 cannot be used 
00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:17.463 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:17.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.463 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:17.463 [2024-07-13 22:08:36.659599] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.723 [2024-07-13 22:08:36.858028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.723 [2024-07-13 22:08:37.098412] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:17.723 [2024-07-13 22:08:37.098442] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:17.982 22:08:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:17.982 22:08:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # return 0 00:25:17.982 22:08:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:25:17.982 22:08:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:17.982 22:08:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:25:17.982 22:08:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:25:17.982 22:08:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # 
local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:17.982 22:08:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:17.982 22:08:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:17.982 22:08:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:17.982 22:08:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:25:18.242 malloc1 00:25:18.242 22:08:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:18.242 [2024-07-13 22:08:37.624379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:18.242 [2024-07-13 22:08:37.624435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:18.242 [2024-07-13 22:08:37.624459] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:25:18.242 [2024-07-13 22:08:37.624471] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:18.242 [2024-07-13 22:08:37.626599] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:18.242 [2024-07-13 22:08:37.626630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:18.242 pt1 00:25:18.502 22:08:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:18.502 22:08:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:18.502 22:08:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:25:18.502 22:08:37 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:25:18.502 22:08:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:18.502 22:08:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:18.502 22:08:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:18.502 22:08:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:18.502 22:08:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:25:18.502 malloc2 00:25:18.502 22:08:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:18.761 [2024-07-13 22:08:37.993872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:18.761 [2024-07-13 22:08:37.993926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:18.761 [2024-07-13 22:08:37.993949] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:25:18.761 [2024-07-13 22:08:37.993976] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:18.761 [2024-07-13 22:08:37.995977] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:18.761 [2024-07-13 22:08:37.996008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:18.761 pt2 00:25:18.761 22:08:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:18.761 22:08:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= 
num_base_bdevs )) 00:25:18.761 22:08:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:25:18.761 [2024-07-13 22:08:38.150314] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:19.021 [2024-07-13 22:08:38.152108] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:19.021 [2024-07-13 22:08:38.152289] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040880 00:25:19.021 [2024-07-13 22:08:38.152305] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:19.021 [2024-07-13 22:08:38.152552] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:25:19.021 [2024-07-13 22:08:38.152743] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040880 00:25:19.021 [2024-07-13 22:08:38.152757] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040880 00:25:19.021 [2024-07-13 22:08:38.152922] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:19.021 22:08:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:19.021 22:08:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:19.021 22:08:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:19.021 22:08:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:19.021 22:08:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:19.021 22:08:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:19.021 22:08:38 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:19.021 22:08:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:19.021 22:08:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:19.021 22:08:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:19.021 22:08:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.021 22:08:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.021 22:08:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:19.021 "name": "raid_bdev1", 00:25:19.021 "uuid": "cd004fd3-79bb-4ba9-818e-3b6b53c092f7", 00:25:19.021 "strip_size_kb": 0, 00:25:19.021 "state": "online", 00:25:19.021 "raid_level": "raid1", 00:25:19.021 "superblock": true, 00:25:19.021 "num_base_bdevs": 2, 00:25:19.021 "num_base_bdevs_discovered": 2, 00:25:19.021 "num_base_bdevs_operational": 2, 00:25:19.021 "base_bdevs_list": [ 00:25:19.021 { 00:25:19.021 "name": "pt1", 00:25:19.021 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:19.021 "is_configured": true, 00:25:19.021 "data_offset": 256, 00:25:19.021 "data_size": 7936 00:25:19.021 }, 00:25:19.021 { 00:25:19.021 "name": "pt2", 00:25:19.021 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:19.021 "is_configured": true, 00:25:19.021 "data_offset": 256, 00:25:19.021 "data_size": 7936 00:25:19.021 } 00:25:19.021 ] 00:25:19.021 }' 00:25:19.021 22:08:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:19.021 22:08:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:19.590 22:08:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # 
verify_raid_bdev_properties raid_bdev1 00:25:19.590 22:08:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:25:19.590 22:08:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:19.590 22:08:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:19.590 22:08:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:19.590 22:08:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:25:19.590 22:08:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:19.590 22:08:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:19.849 [2024-07-13 22:08:38.988693] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:19.849 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:19.849 "name": "raid_bdev1", 00:25:19.849 "aliases": [ 00:25:19.849 "cd004fd3-79bb-4ba9-818e-3b6b53c092f7" 00:25:19.849 ], 00:25:19.849 "product_name": "Raid Volume", 00:25:19.849 "block_size": 4096, 00:25:19.849 "num_blocks": 7936, 00:25:19.849 "uuid": "cd004fd3-79bb-4ba9-818e-3b6b53c092f7", 00:25:19.849 "assigned_rate_limits": { 00:25:19.849 "rw_ios_per_sec": 0, 00:25:19.849 "rw_mbytes_per_sec": 0, 00:25:19.849 "r_mbytes_per_sec": 0, 00:25:19.849 "w_mbytes_per_sec": 0 00:25:19.849 }, 00:25:19.849 "claimed": false, 00:25:19.849 "zoned": false, 00:25:19.849 "supported_io_types": { 00:25:19.849 "read": true, 00:25:19.849 "write": true, 00:25:19.849 "unmap": false, 00:25:19.849 "flush": false, 00:25:19.849 "reset": true, 00:25:19.849 "nvme_admin": false, 00:25:19.849 "nvme_io": false, 00:25:19.849 "nvme_io_md": false, 00:25:19.849 "write_zeroes": true, 00:25:19.849 "zcopy": false, 
00:25:19.849 "get_zone_info": false, 00:25:19.849 "zone_management": false, 00:25:19.849 "zone_append": false, 00:25:19.849 "compare": false, 00:25:19.849 "compare_and_write": false, 00:25:19.849 "abort": false, 00:25:19.849 "seek_hole": false, 00:25:19.849 "seek_data": false, 00:25:19.849 "copy": false, 00:25:19.849 "nvme_iov_md": false 00:25:19.849 }, 00:25:19.849 "memory_domains": [ 00:25:19.849 { 00:25:19.849 "dma_device_id": "system", 00:25:19.849 "dma_device_type": 1 00:25:19.849 }, 00:25:19.849 { 00:25:19.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:19.849 "dma_device_type": 2 00:25:19.849 }, 00:25:19.849 { 00:25:19.849 "dma_device_id": "system", 00:25:19.849 "dma_device_type": 1 00:25:19.849 }, 00:25:19.849 { 00:25:19.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:19.849 "dma_device_type": 2 00:25:19.849 } 00:25:19.849 ], 00:25:19.849 "driver_specific": { 00:25:19.849 "raid": { 00:25:19.849 "uuid": "cd004fd3-79bb-4ba9-818e-3b6b53c092f7", 00:25:19.849 "strip_size_kb": 0, 00:25:19.849 "state": "online", 00:25:19.849 "raid_level": "raid1", 00:25:19.849 "superblock": true, 00:25:19.849 "num_base_bdevs": 2, 00:25:19.849 "num_base_bdevs_discovered": 2, 00:25:19.849 "num_base_bdevs_operational": 2, 00:25:19.849 "base_bdevs_list": [ 00:25:19.849 { 00:25:19.849 "name": "pt1", 00:25:19.849 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:19.849 "is_configured": true, 00:25:19.849 "data_offset": 256, 00:25:19.849 "data_size": 7936 00:25:19.849 }, 00:25:19.849 { 00:25:19.849 "name": "pt2", 00:25:19.849 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:19.849 "is_configured": true, 00:25:19.849 "data_offset": 256, 00:25:19.849 "data_size": 7936 00:25:19.849 } 00:25:19.849 ] 00:25:19.849 } 00:25:19.849 } 00:25:19.849 }' 00:25:19.849 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:19.849 22:08:39 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:25:19.849 pt2' 00:25:19.849 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:19.850 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:19.850 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:19.850 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:19.850 "name": "pt1", 00:25:19.850 "aliases": [ 00:25:19.850 "00000000-0000-0000-0000-000000000001" 00:25:19.850 ], 00:25:19.850 "product_name": "passthru", 00:25:19.850 "block_size": 4096, 00:25:19.850 "num_blocks": 8192, 00:25:19.850 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:19.850 "assigned_rate_limits": { 00:25:19.850 "rw_ios_per_sec": 0, 00:25:19.850 "rw_mbytes_per_sec": 0, 00:25:19.850 "r_mbytes_per_sec": 0, 00:25:19.850 "w_mbytes_per_sec": 0 00:25:19.850 }, 00:25:19.850 "claimed": true, 00:25:19.850 "claim_type": "exclusive_write", 00:25:19.850 "zoned": false, 00:25:19.850 "supported_io_types": { 00:25:19.850 "read": true, 00:25:19.850 "write": true, 00:25:19.850 "unmap": true, 00:25:19.850 "flush": true, 00:25:19.850 "reset": true, 00:25:19.850 "nvme_admin": false, 00:25:19.850 "nvme_io": false, 00:25:19.850 "nvme_io_md": false, 00:25:19.850 "write_zeroes": true, 00:25:19.850 "zcopy": true, 00:25:19.850 "get_zone_info": false, 00:25:19.850 "zone_management": false, 00:25:19.850 "zone_append": false, 00:25:19.850 "compare": false, 00:25:19.850 "compare_and_write": false, 00:25:19.850 "abort": true, 00:25:19.850 "seek_hole": false, 00:25:19.850 "seek_data": false, 00:25:19.850 "copy": true, 00:25:19.850 "nvme_iov_md": false 00:25:19.850 }, 00:25:19.850 "memory_domains": [ 00:25:19.850 { 00:25:19.850 "dma_device_id": "system", 00:25:19.850 "dma_device_type": 1 00:25:19.850 }, 
00:25:19.850 { 00:25:19.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:19.850 "dma_device_type": 2 00:25:19.850 } 00:25:19.850 ], 00:25:19.850 "driver_specific": { 00:25:19.850 "passthru": { 00:25:19.850 "name": "pt1", 00:25:19.850 "base_bdev_name": "malloc1" 00:25:19.850 } 00:25:19.850 } 00:25:19.850 }' 00:25:19.850 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:20.109 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:20.109 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:25:20.109 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:20.109 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:20.109 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:20.109 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:20.109 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:20.109 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:20.109 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:20.109 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:20.109 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:20.109 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:20.109 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:20.109 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:20.368 22:08:39 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:20.368 "name": "pt2", 00:25:20.368 "aliases": [ 00:25:20.368 "00000000-0000-0000-0000-000000000002" 00:25:20.368 ], 00:25:20.368 "product_name": "passthru", 00:25:20.368 "block_size": 4096, 00:25:20.368 "num_blocks": 8192, 00:25:20.368 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:20.368 "assigned_rate_limits": { 00:25:20.368 "rw_ios_per_sec": 0, 00:25:20.368 "rw_mbytes_per_sec": 0, 00:25:20.368 "r_mbytes_per_sec": 0, 00:25:20.368 "w_mbytes_per_sec": 0 00:25:20.368 }, 00:25:20.368 "claimed": true, 00:25:20.368 "claim_type": "exclusive_write", 00:25:20.368 "zoned": false, 00:25:20.368 "supported_io_types": { 00:25:20.368 "read": true, 00:25:20.368 "write": true, 00:25:20.368 "unmap": true, 00:25:20.368 "flush": true, 00:25:20.368 "reset": true, 00:25:20.368 "nvme_admin": false, 00:25:20.368 "nvme_io": false, 00:25:20.368 "nvme_io_md": false, 00:25:20.368 "write_zeroes": true, 00:25:20.368 "zcopy": true, 00:25:20.368 "get_zone_info": false, 00:25:20.368 "zone_management": false, 00:25:20.368 "zone_append": false, 00:25:20.368 "compare": false, 00:25:20.368 "compare_and_write": false, 00:25:20.368 "abort": true, 00:25:20.368 "seek_hole": false, 00:25:20.368 "seek_data": false, 00:25:20.368 "copy": true, 00:25:20.368 "nvme_iov_md": false 00:25:20.368 }, 00:25:20.368 "memory_domains": [ 00:25:20.368 { 00:25:20.368 "dma_device_id": "system", 00:25:20.368 "dma_device_type": 1 00:25:20.368 }, 00:25:20.368 { 00:25:20.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:20.368 "dma_device_type": 2 00:25:20.368 } 00:25:20.368 ], 00:25:20.368 "driver_specific": { 00:25:20.368 "passthru": { 00:25:20.368 "name": "pt2", 00:25:20.369 "base_bdev_name": "malloc2" 00:25:20.369 } 00:25:20.369 } 00:25:20.369 }' 00:25:20.369 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:20.369 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:20.369 
22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:25:20.369 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:20.628 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:20.628 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:20.628 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:20.628 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:20.628 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:20.628 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:20.628 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:20.628 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:20.628 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:20.628 22:08:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:25:20.887 [2024-07-13 22:08:40.127708] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:20.887 22:08:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=cd004fd3-79bb-4ba9-818e-3b6b53c092f7 00:25:20.887 22:08:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' -z cd004fd3-79bb-4ba9-818e-3b6b53c092f7 ']' 00:25:20.887 22:08:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:21.147 [2024-07-13 22:08:40.299954] bdev_raid.c:2356:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:25:21.147 [2024-07-13 22:08:40.299984] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:21.147 [2024-07-13 22:08:40.300057] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:21.147 [2024-07-13 22:08:40.300112] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:21.147 [2024-07-13 22:08:40.300132] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040880 name raid_bdev1, state offline 00:25:21.147 22:08:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:25:21.147 22:08:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.147 22:08:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:25:21.147 22:08:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:25:21.147 22:08:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:21.147 22:08:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:21.406 22:08:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:21.406 22:08:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:21.666 22:08:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:21.666 22:08:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:25:21.666 22:08:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:25:21.666 22:08:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:25:21.666 22:08:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # local es=0 00:25:21.666 22:08:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:25:21.666 22:08:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:21.666 22:08:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:21.666 22:08:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:21.666 22:08:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:21.666 22:08:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:21.666 22:08:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:21.666 22:08:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:21.666 22:08:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:25:21.666 22:08:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:25:21.926 [2024-07-13 22:08:41.150165] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:21.926 [2024-07-13 22:08:41.151888] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:21.926 [2024-07-13 22:08:41.151958] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:21.926 [2024-07-13 22:08:41.152019] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:21.926 [2024-07-13 22:08:41.152036] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:21.926 [2024-07-13 22:08:41.152048] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state configuring 00:25:21.926 request: 00:25:21.926 { 00:25:21.926 "name": "raid_bdev1", 00:25:21.926 "raid_level": "raid1", 00:25:21.926 "base_bdevs": [ 00:25:21.926 "malloc1", 00:25:21.926 "malloc2" 00:25:21.926 ], 00:25:21.926 "superblock": false, 00:25:21.926 "method": "bdev_raid_create", 00:25:21.926 "req_id": 1 00:25:21.926 } 00:25:21.926 Got JSON-RPC error response 00:25:21.926 response: 00:25:21.926 { 00:25:21.926 "code": -17, 00:25:21.926 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:21.926 } 00:25:21.926 22:08:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:25:21.926 22:08:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:21.926 22:08:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 
00:25:21.926 22:08:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:21.926 22:08:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.926 22:08:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:25:22.245 22:08:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:25:22.245 22:08:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:25:22.245 22:08:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:22.245 [2024-07-13 22:08:41.495004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:22.245 [2024-07-13 22:08:41.495056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:22.245 [2024-07-13 22:08:41.495090] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:25:22.245 [2024-07-13 22:08:41.495103] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:22.245 [2024-07-13 22:08:41.497214] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:22.245 [2024-07-13 22:08:41.497244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:22.245 [2024-07-13 22:08:41.497340] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:22.245 [2024-07-13 22:08:41.497414] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:22.245 pt1 00:25:22.245 22:08:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:25:22.245 
22:08:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:22.245 22:08:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:22.245 22:08:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:22.245 22:08:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:22.245 22:08:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:22.245 22:08:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:22.245 22:08:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:22.245 22:08:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:22.245 22:08:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:22.245 22:08:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.245 22:08:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:22.504 22:08:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:22.504 "name": "raid_bdev1", 00:25:22.504 "uuid": "cd004fd3-79bb-4ba9-818e-3b6b53c092f7", 00:25:22.504 "strip_size_kb": 0, 00:25:22.504 "state": "configuring", 00:25:22.504 "raid_level": "raid1", 00:25:22.504 "superblock": true, 00:25:22.504 "num_base_bdevs": 2, 00:25:22.504 "num_base_bdevs_discovered": 1, 00:25:22.504 "num_base_bdevs_operational": 2, 00:25:22.504 "base_bdevs_list": [ 00:25:22.504 { 00:25:22.504 "name": "pt1", 00:25:22.504 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:22.504 "is_configured": true, 00:25:22.504 "data_offset": 256, 
00:25:22.504 "data_size": 7936 00:25:22.504 }, 00:25:22.504 { 00:25:22.504 "name": null, 00:25:22.504 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:22.504 "is_configured": false, 00:25:22.504 "data_offset": 256, 00:25:22.504 "data_size": 7936 00:25:22.504 } 00:25:22.504 ] 00:25:22.504 }' 00:25:22.504 22:08:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:22.504 22:08:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:23.072 22:08:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:25:23.072 22:08:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:25:23.073 22:08:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:23.073 22:08:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:23.073 [2024-07-13 22:08:42.317163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:23.073 [2024-07-13 22:08:42.317220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:23.073 [2024-07-13 22:08:42.317240] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041d80 00:25:23.073 [2024-07-13 22:08:42.317254] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:23.073 [2024-07-13 22:08:42.317702] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:23.073 [2024-07-13 22:08:42.317723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:23.073 [2024-07-13 22:08:42.317803] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:23.073 [2024-07-13 22:08:42.317833] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:23.073 [2024-07-13 22:08:42.317979] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:25:23.073 [2024-07-13 22:08:42.317994] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:23.073 [2024-07-13 22:08:42.318218] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:25:23.073 [2024-07-13 22:08:42.318390] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:25:23.073 [2024-07-13 22:08:42.318400] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:25:23.073 [2024-07-13 22:08:42.318538] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:23.073 pt2 00:25:23.073 22:08:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:23.073 22:08:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:23.073 22:08:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:23.073 22:08:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:23.073 22:08:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:23.073 22:08:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:23.073 22:08:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:23.073 22:08:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:23.073 22:08:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:23.073 22:08:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:25:23.073 22:08:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:23.073 22:08:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:23.073 22:08:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:23.073 22:08:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:23.332 22:08:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:23.332 "name": "raid_bdev1", 00:25:23.332 "uuid": "cd004fd3-79bb-4ba9-818e-3b6b53c092f7", 00:25:23.332 "strip_size_kb": 0, 00:25:23.332 "state": "online", 00:25:23.332 "raid_level": "raid1", 00:25:23.332 "superblock": true, 00:25:23.332 "num_base_bdevs": 2, 00:25:23.332 "num_base_bdevs_discovered": 2, 00:25:23.332 "num_base_bdevs_operational": 2, 00:25:23.332 "base_bdevs_list": [ 00:25:23.332 { 00:25:23.332 "name": "pt1", 00:25:23.332 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:23.332 "is_configured": true, 00:25:23.332 "data_offset": 256, 00:25:23.332 "data_size": 7936 00:25:23.332 }, 00:25:23.332 { 00:25:23.332 "name": "pt2", 00:25:23.332 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:23.332 "is_configured": true, 00:25:23.332 "data_offset": 256, 00:25:23.332 "data_size": 7936 00:25:23.332 } 00:25:23.332 ] 00:25:23.332 }' 00:25:23.332 22:08:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:23.332 22:08:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:23.900 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:25:23.900 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:25:23.900 22:08:43 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:23.900 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:23.900 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:23.900 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:25:23.900 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:23.900 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:23.900 [2024-07-13 22:08:43.159582] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:23.900 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:23.900 "name": "raid_bdev1", 00:25:23.900 "aliases": [ 00:25:23.900 "cd004fd3-79bb-4ba9-818e-3b6b53c092f7" 00:25:23.900 ], 00:25:23.900 "product_name": "Raid Volume", 00:25:23.900 "block_size": 4096, 00:25:23.900 "num_blocks": 7936, 00:25:23.900 "uuid": "cd004fd3-79bb-4ba9-818e-3b6b53c092f7", 00:25:23.900 "assigned_rate_limits": { 00:25:23.900 "rw_ios_per_sec": 0, 00:25:23.900 "rw_mbytes_per_sec": 0, 00:25:23.900 "r_mbytes_per_sec": 0, 00:25:23.900 "w_mbytes_per_sec": 0 00:25:23.900 }, 00:25:23.900 "claimed": false, 00:25:23.900 "zoned": false, 00:25:23.900 "supported_io_types": { 00:25:23.900 "read": true, 00:25:23.900 "write": true, 00:25:23.900 "unmap": false, 00:25:23.900 "flush": false, 00:25:23.900 "reset": true, 00:25:23.900 "nvme_admin": false, 00:25:23.900 "nvme_io": false, 00:25:23.900 "nvme_io_md": false, 00:25:23.900 "write_zeroes": true, 00:25:23.900 "zcopy": false, 00:25:23.900 "get_zone_info": false, 00:25:23.900 "zone_management": false, 00:25:23.900 "zone_append": false, 00:25:23.900 "compare": false, 00:25:23.900 "compare_and_write": 
false, 00:25:23.900 "abort": false, 00:25:23.900 "seek_hole": false, 00:25:23.900 "seek_data": false, 00:25:23.900 "copy": false, 00:25:23.900 "nvme_iov_md": false 00:25:23.900 }, 00:25:23.900 "memory_domains": [ 00:25:23.900 { 00:25:23.900 "dma_device_id": "system", 00:25:23.900 "dma_device_type": 1 00:25:23.900 }, 00:25:23.900 { 00:25:23.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:23.900 "dma_device_type": 2 00:25:23.900 }, 00:25:23.900 { 00:25:23.900 "dma_device_id": "system", 00:25:23.900 "dma_device_type": 1 00:25:23.900 }, 00:25:23.900 { 00:25:23.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:23.900 "dma_device_type": 2 00:25:23.900 } 00:25:23.900 ], 00:25:23.900 "driver_specific": { 00:25:23.900 "raid": { 00:25:23.900 "uuid": "cd004fd3-79bb-4ba9-818e-3b6b53c092f7", 00:25:23.900 "strip_size_kb": 0, 00:25:23.900 "state": "online", 00:25:23.900 "raid_level": "raid1", 00:25:23.900 "superblock": true, 00:25:23.900 "num_base_bdevs": 2, 00:25:23.900 "num_base_bdevs_discovered": 2, 00:25:23.900 "num_base_bdevs_operational": 2, 00:25:23.900 "base_bdevs_list": [ 00:25:23.900 { 00:25:23.900 "name": "pt1", 00:25:23.900 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:23.901 "is_configured": true, 00:25:23.901 "data_offset": 256, 00:25:23.901 "data_size": 7936 00:25:23.901 }, 00:25:23.901 { 00:25:23.901 "name": "pt2", 00:25:23.901 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:23.901 "is_configured": true, 00:25:23.901 "data_offset": 256, 00:25:23.901 "data_size": 7936 00:25:23.901 } 00:25:23.901 ] 00:25:23.901 } 00:25:23.901 } 00:25:23.901 }' 00:25:23.901 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:23.901 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:25:23.901 pt2' 00:25:23.901 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:25:23.901 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:23.901 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:24.159 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:24.159 "name": "pt1", 00:25:24.159 "aliases": [ 00:25:24.160 "00000000-0000-0000-0000-000000000001" 00:25:24.160 ], 00:25:24.160 "product_name": "passthru", 00:25:24.160 "block_size": 4096, 00:25:24.160 "num_blocks": 8192, 00:25:24.160 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:24.160 "assigned_rate_limits": { 00:25:24.160 "rw_ios_per_sec": 0, 00:25:24.160 "rw_mbytes_per_sec": 0, 00:25:24.160 "r_mbytes_per_sec": 0, 00:25:24.160 "w_mbytes_per_sec": 0 00:25:24.160 }, 00:25:24.160 "claimed": true, 00:25:24.160 "claim_type": "exclusive_write", 00:25:24.160 "zoned": false, 00:25:24.160 "supported_io_types": { 00:25:24.160 "read": true, 00:25:24.160 "write": true, 00:25:24.160 "unmap": true, 00:25:24.160 "flush": true, 00:25:24.160 "reset": true, 00:25:24.160 "nvme_admin": false, 00:25:24.160 "nvme_io": false, 00:25:24.160 "nvme_io_md": false, 00:25:24.160 "write_zeroes": true, 00:25:24.160 "zcopy": true, 00:25:24.160 "get_zone_info": false, 00:25:24.160 "zone_management": false, 00:25:24.160 "zone_append": false, 00:25:24.160 "compare": false, 00:25:24.160 "compare_and_write": false, 00:25:24.160 "abort": true, 00:25:24.160 "seek_hole": false, 00:25:24.160 "seek_data": false, 00:25:24.160 "copy": true, 00:25:24.160 "nvme_iov_md": false 00:25:24.160 }, 00:25:24.160 "memory_domains": [ 00:25:24.160 { 00:25:24.160 "dma_device_id": "system", 00:25:24.160 "dma_device_type": 1 00:25:24.160 }, 00:25:24.160 { 00:25:24.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:24.160 "dma_device_type": 2 00:25:24.160 } 00:25:24.160 ], 00:25:24.160 
"driver_specific": { 00:25:24.160 "passthru": { 00:25:24.160 "name": "pt1", 00:25:24.160 "base_bdev_name": "malloc1" 00:25:24.160 } 00:25:24.160 } 00:25:24.160 }' 00:25:24.160 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:24.160 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:24.160 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:25:24.160 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:24.160 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:24.160 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:24.160 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:24.418 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:24.418 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:24.418 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:24.418 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:24.418 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:24.418 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:24.418 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:24.418 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:24.677 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:24.678 "name": "pt2", 00:25:24.678 "aliases": [ 00:25:24.678 
"00000000-0000-0000-0000-000000000002" 00:25:24.678 ], 00:25:24.678 "product_name": "passthru", 00:25:24.678 "block_size": 4096, 00:25:24.678 "num_blocks": 8192, 00:25:24.678 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:24.678 "assigned_rate_limits": { 00:25:24.678 "rw_ios_per_sec": 0, 00:25:24.678 "rw_mbytes_per_sec": 0, 00:25:24.678 "r_mbytes_per_sec": 0, 00:25:24.678 "w_mbytes_per_sec": 0 00:25:24.678 }, 00:25:24.678 "claimed": true, 00:25:24.678 "claim_type": "exclusive_write", 00:25:24.678 "zoned": false, 00:25:24.678 "supported_io_types": { 00:25:24.678 "read": true, 00:25:24.678 "write": true, 00:25:24.678 "unmap": true, 00:25:24.678 "flush": true, 00:25:24.678 "reset": true, 00:25:24.678 "nvme_admin": false, 00:25:24.678 "nvme_io": false, 00:25:24.678 "nvme_io_md": false, 00:25:24.678 "write_zeroes": true, 00:25:24.678 "zcopy": true, 00:25:24.678 "get_zone_info": false, 00:25:24.678 "zone_management": false, 00:25:24.678 "zone_append": false, 00:25:24.678 "compare": false, 00:25:24.678 "compare_and_write": false, 00:25:24.678 "abort": true, 00:25:24.678 "seek_hole": false, 00:25:24.678 "seek_data": false, 00:25:24.678 "copy": true, 00:25:24.678 "nvme_iov_md": false 00:25:24.678 }, 00:25:24.678 "memory_domains": [ 00:25:24.678 { 00:25:24.678 "dma_device_id": "system", 00:25:24.678 "dma_device_type": 1 00:25:24.678 }, 00:25:24.678 { 00:25:24.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:24.678 "dma_device_type": 2 00:25:24.678 } 00:25:24.678 ], 00:25:24.678 "driver_specific": { 00:25:24.678 "passthru": { 00:25:24.678 "name": "pt2", 00:25:24.678 "base_bdev_name": "malloc2" 00:25:24.678 } 00:25:24.678 } 00:25:24.678 }' 00:25:24.678 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:24.678 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:24.678 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:25:24.678 22:08:43 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:24.678 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:24.678 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:24.678 22:08:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:24.678 22:08:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:24.937 22:08:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:24.937 22:08:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:24.937 22:08:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:24.937 22:08:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:24.937 22:08:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:24.937 22:08:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:25:24.937 [2024-07-13 22:08:44.314674] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:25.197 22:08:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' cd004fd3-79bb-4ba9-818e-3b6b53c092f7 '!=' cd004fd3-79bb-4ba9-818e-3b6b53c092f7 ']' 00:25:25.197 22:08:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:25:25.197 22:08:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:25.197 22:08:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:25:25.197 22:08:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:25:25.197 [2024-07-13 22:08:44.482895] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:25.197 22:08:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:25.197 22:08:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:25.197 22:08:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:25.197 22:08:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:25.197 22:08:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:25.197 22:08:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:25.197 22:08:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:25.197 22:08:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:25.197 22:08:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:25.197 22:08:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:25.197 22:08:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.197 22:08:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.456 22:08:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:25.456 "name": "raid_bdev1", 00:25:25.456 "uuid": "cd004fd3-79bb-4ba9-818e-3b6b53c092f7", 00:25:25.456 "strip_size_kb": 0, 00:25:25.456 "state": "online", 00:25:25.456 "raid_level": "raid1", 00:25:25.456 "superblock": true, 00:25:25.456 "num_base_bdevs": 2, 00:25:25.456 "num_base_bdevs_discovered": 1, 00:25:25.456 
"num_base_bdevs_operational": 1, 00:25:25.456 "base_bdevs_list": [ 00:25:25.456 { 00:25:25.456 "name": null, 00:25:25.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.456 "is_configured": false, 00:25:25.456 "data_offset": 256, 00:25:25.456 "data_size": 7936 00:25:25.456 }, 00:25:25.456 { 00:25:25.456 "name": "pt2", 00:25:25.456 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:25.456 "is_configured": true, 00:25:25.456 "data_offset": 256, 00:25:25.456 "data_size": 7936 00:25:25.456 } 00:25:25.456 ] 00:25:25.456 }' 00:25:25.456 22:08:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:25.456 22:08:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:26.025 22:08:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:26.025 [2024-07-13 22:08:45.293031] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:26.025 [2024-07-13 22:08:45.293059] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:26.025 [2024-07-13 22:08:45.293126] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:26.025 [2024-07-13 22:08:45.293170] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:26.025 [2024-07-13 22:08:45.293184] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:25:26.025 22:08:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.025 22:08:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:25:26.284 22:08:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # 
raid_bdev= 00:25:26.284 22:08:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:25:26.284 22:08:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:25:26.284 22:08:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:25:26.284 22:08:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:26.284 22:08:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:25:26.284 22:08:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:25:26.284 22:08:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:25:26.284 22:08:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:25:26.284 22:08:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:25:26.284 22:08:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:26.543 [2024-07-13 22:08:45.810348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:26.543 [2024-07-13 22:08:45.810413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:26.543 [2024-07-13 22:08:45.810435] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042080 00:25:26.543 [2024-07-13 22:08:45.810448] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:26.543 [2024-07-13 22:08:45.812512] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:26.543 [2024-07-13 22:08:45.812542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt2 00:25:26.543 [2024-07-13 22:08:45.812617] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:26.543 [2024-07-13 22:08:45.812674] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:26.543 [2024-07-13 22:08:45.812790] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042680 00:25:26.543 [2024-07-13 22:08:45.812802] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:26.543 [2024-07-13 22:08:45.813050] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:25:26.543 [2024-07-13 22:08:45.813227] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042680 00:25:26.543 [2024-07-13 22:08:45.813238] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042680 00:25:26.543 [2024-07-13 22:08:45.813378] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:26.543 pt2 00:25:26.543 22:08:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:26.543 22:08:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:26.543 22:08:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:26.543 22:08:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:26.543 22:08:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:26.543 22:08:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:26.543 22:08:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:26.543 22:08:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:26.543 22:08:45 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:26.543 22:08:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:26.543 22:08:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.543 22:08:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.802 22:08:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:26.802 "name": "raid_bdev1", 00:25:26.802 "uuid": "cd004fd3-79bb-4ba9-818e-3b6b53c092f7", 00:25:26.802 "strip_size_kb": 0, 00:25:26.802 "state": "online", 00:25:26.802 "raid_level": "raid1", 00:25:26.802 "superblock": true, 00:25:26.802 "num_base_bdevs": 2, 00:25:26.802 "num_base_bdevs_discovered": 1, 00:25:26.802 "num_base_bdevs_operational": 1, 00:25:26.802 "base_bdevs_list": [ 00:25:26.802 { 00:25:26.802 "name": null, 00:25:26.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.802 "is_configured": false, 00:25:26.802 "data_offset": 256, 00:25:26.802 "data_size": 7936 00:25:26.802 }, 00:25:26.802 { 00:25:26.802 "name": "pt2", 00:25:26.802 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:26.802 "is_configured": true, 00:25:26.802 "data_offset": 256, 00:25:26.802 "data_size": 7936 00:25:26.802 } 00:25:26.802 ] 00:25:26.802 }' 00:25:26.802 22:08:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:26.802 22:08:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:27.369 22:08:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:27.369 [2024-07-13 22:08:46.640584] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: 
raid_bdev1 00:25:27.369 [2024-07-13 22:08:46.640614] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:27.369 [2024-07-13 22:08:46.640680] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:27.369 [2024-07-13 22:08:46.640731] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:27.369 [2024-07-13 22:08:46.640742] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state offline 00:25:27.370 22:08:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.370 22:08:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:25:27.629 22:08:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:25:27.629 22:08:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:25:27.629 22:08:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:25:27.629 22:08:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:27.629 [2024-07-13 22:08:46.989503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:27.629 [2024-07-13 22:08:46.989557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:27.629 [2024-07-13 22:08:46.989578] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980 00:25:27.629 [2024-07-13 22:08:46.989589] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:27.629 [2024-07-13 22:08:46.991714] vbdev_passthru.c: 708:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:25:27.629 [2024-07-13 22:08:46.991741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:27.629 [2024-07-13 22:08:46.991819] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:27.629 [2024-07-13 22:08:46.991883] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:27.629 [2024-07-13 22:08:46.992047] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:27.629 [2024-07-13 22:08:46.992059] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:27.629 [2024-07-13 22:08:46.992079] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042f80 name raid_bdev1, state configuring 00:25:27.629 [2024-07-13 22:08:46.992135] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:27.629 [2024-07-13 22:08:46.992207] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:25:27.629 [2024-07-13 22:08:46.992217] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:27.629 [2024-07-13 22:08:46.992444] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:25:27.629 [2024-07-13 22:08:46.992608] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:25:27.629 [2024-07-13 22:08:46.992620] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:25:27.629 [2024-07-13 22:08:46.992762] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:27.629 pt1 00:25:27.629 22:08:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:25:27.629 22:08:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:25:27.629 22:08:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:27.629 22:08:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:27.629 22:08:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:27.629 22:08:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:27.629 22:08:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:27.629 22:08:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:27.629 22:08:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:27.629 22:08:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:27.629 22:08:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:27.629 22:08:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.629 22:08:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.888 22:08:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:27.888 "name": "raid_bdev1", 00:25:27.888 "uuid": "cd004fd3-79bb-4ba9-818e-3b6b53c092f7", 00:25:27.888 "strip_size_kb": 0, 00:25:27.888 "state": "online", 00:25:27.888 "raid_level": "raid1", 00:25:27.888 "superblock": true, 00:25:27.888 "num_base_bdevs": 2, 00:25:27.888 "num_base_bdevs_discovered": 1, 00:25:27.888 "num_base_bdevs_operational": 1, 00:25:27.888 "base_bdevs_list": [ 00:25:27.888 { 00:25:27.888 "name": null, 00:25:27.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:27.888 "is_configured": false, 00:25:27.888 "data_offset": 256, 
00:25:27.888 "data_size": 7936 00:25:27.888 }, 00:25:27.888 { 00:25:27.888 "name": "pt2", 00:25:27.888 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:27.888 "is_configured": true, 00:25:27.888 "data_offset": 256, 00:25:27.888 "data_size": 7936 00:25:27.888 } 00:25:27.888 ] 00:25:27.888 }' 00:25:27.888 22:08:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:27.888 22:08:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:28.456 22:08:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:25:28.456 22:08:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:25:28.456 22:08:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:25:28.715 22:08:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:28.715 22:08:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:25:28.715 [2024-07-13 22:08:48.004362] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:28.715 22:08:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' cd004fd3-79bb-4ba9-818e-3b6b53c092f7 '!=' cd004fd3-79bb-4ba9-818e-3b6b53c092f7 ']' 00:25:28.715 22:08:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 1498626 00:25:28.715 22:08:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@948 -- # '[' -z 1498626 ']' 00:25:28.715 22:08:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # kill -0 1498626 00:25:28.715 22:08:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # uname 00:25:28.715 22:08:48 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:28.715 22:08:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1498626 00:25:28.715 22:08:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:28.715 22:08:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:28.715 22:08:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1498626' 00:25:28.715 killing process with pid 1498626 00:25:28.715 22:08:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@967 -- # kill 1498626 00:25:28.715 [2024-07-13 22:08:48.077283] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:28.715 [2024-07-13 22:08:48.077364] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:28.715 [2024-07-13 22:08:48.077408] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:28.715 [2024-07-13 22:08:48.077422] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:25:28.715 22:08:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # wait 1498626 00:25:28.974 [2024-07-13 22:08:48.210257] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:30.353 22:08:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:25:30.353 00:25:30.353 real 0m13.017s 00:25:30.353 user 0m22.339s 00:25:30.353 sys 0m2.454s 00:25:30.353 22:08:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:30.353 22:08:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:30.353 ************************************ 00:25:30.353 END TEST raid_superblock_test_4k 00:25:30.353 
************************************ 00:25:30.353 22:08:49 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:30.353 22:08:49 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' true = true ']' 00:25:30.353 22:08:49 bdev_raid -- bdev/bdev_raid.sh@901 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:25:30.353 22:08:49 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:25:30.353 22:08:49 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:30.353 22:08:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:30.353 ************************************ 00:25:30.353 START TEST raid_rebuild_test_sb_4k 00:25:30.353 ************************************ 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local verify=true 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local strip_size 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local create_arg 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local data_offset 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # raid_pid=1501062 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # waitforlisten 1501062 /var/tmp/spdk-raid.sock 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@829 -- # '[' -z 1501062 ']' 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:30.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:30.353 22:08:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:30.353 [2024-07-13 22:08:49.615509] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:30.353 [2024-07-13 22:08:49.615594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501062 ] 00:25:30.353 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:30.353 Zero copy mechanism will not be used. 
00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:30.353 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:30.353 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:30.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.353 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:30.612 [2024-07-13 22:08:49.771423] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.612 [2024-07-13 22:08:49.986340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.871 [2024-07-13 22:08:50.244647] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:30.871 [2024-07-13 22:08:50.244678] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:31.129 22:08:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:31.130 22:08:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:25:31.130 22:08:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:31.130 22:08:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:25:31.388 BaseBdev1_malloc 00:25:31.388 22:08:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:31.388 [2024-07-13 22:08:50.752055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:31.388 [2024-07-13 22:08:50.752116] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:31.388 [2024-07-13 22:08:50.752155] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:25:31.388 [2024-07-13 22:08:50.752169] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:31.388 [2024-07-13 22:08:50.754222] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:31.388 [2024-07-13 22:08:50.754254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:31.388 BaseBdev1 00:25:31.388 22:08:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:31.388 22:08:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:25:31.648 BaseBdev2_malloc 00:25:31.648 22:08:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:31.907 [2024-07-13 22:08:51.111638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:31.907 [2024-07-13 22:08:51.111683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:31.907 [2024-07-13 22:08:51.111704] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:25:31.907 [2024-07-13 22:08:51.111719] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:31.907 [2024-07-13 22:08:51.113832] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:31.907 [2024-07-13 22:08:51.113862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:31.907 BaseBdev2 00:25:31.907 22:08:51 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:25:32.166 spare_malloc 00:25:32.166 22:08:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:32.166 spare_delay 00:25:32.166 22:08:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:32.425 [2024-07-13 22:08:51.648774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:32.425 [2024-07-13 22:08:51.648818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:32.425 [2024-07-13 22:08:51.648837] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:25:32.425 [2024-07-13 22:08:51.648851] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:32.425 [2024-07-13 22:08:51.650917] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:32.425 [2024-07-13 22:08:51.650947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:32.425 spare 00:25:32.425 22:08:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:25:32.685 [2024-07-13 22:08:51.829270] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:32.685 [2024-07-13 22:08:51.831011] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:32.685 [2024-07-13 22:08:51.831170] 
bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:25:32.685 [2024-07-13 22:08:51.831189] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:32.685 [2024-07-13 22:08:51.831445] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:25:32.685 [2024-07-13 22:08:51.831628] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:25:32.685 [2024-07-13 22:08:51.831639] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:25:32.685 [2024-07-13 22:08:51.831774] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:32.685 22:08:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:32.685 22:08:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:32.685 22:08:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:32.685 22:08:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:32.685 22:08:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:32.685 22:08:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:32.685 22:08:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:32.685 22:08:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:32.685 22:08:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:32.685 22:08:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:32.685 22:08:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.685 22:08:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:32.685 22:08:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:32.685 "name": "raid_bdev1", 00:25:32.685 "uuid": "99dc275a-9802-4420-9060-56268f61e854", 00:25:32.685 "strip_size_kb": 0, 00:25:32.685 "state": "online", 00:25:32.685 "raid_level": "raid1", 00:25:32.685 "superblock": true, 00:25:32.685 "num_base_bdevs": 2, 00:25:32.685 "num_base_bdevs_discovered": 2, 00:25:32.685 "num_base_bdevs_operational": 2, 00:25:32.685 "base_bdevs_list": [ 00:25:32.685 { 00:25:32.685 "name": "BaseBdev1", 00:25:32.685 "uuid": "3f74a002-3288-5f04-95fd-a7538c8ed795", 00:25:32.685 "is_configured": true, 00:25:32.685 "data_offset": 256, 00:25:32.685 "data_size": 7936 00:25:32.685 }, 00:25:32.685 { 00:25:32.685 "name": "BaseBdev2", 00:25:32.685 "uuid": "86225afe-23a3-59b8-9b74-d316e9098040", 00:25:32.685 "is_configured": true, 00:25:32.685 "data_offset": 256, 00:25:32.685 "data_size": 7936 00:25:32.685 } 00:25:32.685 ] 00:25:32.685 }' 00:25:32.685 22:08:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:32.685 22:08:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:33.331 22:08:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:25:33.331 22:08:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:33.331 [2024-07-13 22:08:52.699793] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:33.331 22:08:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:25:33.590 22:08:52 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.590 22:08:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:33.590 22:08:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:25:33.590 22:08:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:25:33.590 22:08:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:25:33.590 22:08:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:25:33.590 22:08:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:33.590 22:08:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:33.590 22:08:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:33.590 22:08:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:33.590 22:08:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:33.590 22:08:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:33.590 22:08:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:25:33.590 22:08:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:33.590 22:08:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:33.590 22:08:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:33.849 [2024-07-13 22:08:53.032478] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:25:33.849 /dev/nbd0 00:25:33.849 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:33.849 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:33.849 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:33.850 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:25:33.850 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:33.850 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:33.850 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:33.850 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:25:33.850 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:33.850 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:33.850 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:33.850 1+0 records in 00:25:33.850 1+0 records out 00:25:33.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253147 s, 16.2 MB/s 00:25:33.850 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:33.850 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:25:33.850 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:33.850 22:08:53 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:33.850 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:25:33.850 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:33.850 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:33.850 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:25:33.850 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:25:33.850 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:25:34.420 7936+0 records in 00:25:34.420 7936+0 records out 00:25:34.420 32505856 bytes (33 MB, 31 MiB) copied, 0.576296 s, 56.4 MB/s 00:25:34.420 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:34.420 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:34.420 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:34.420 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:34.420 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:25:34.420 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:34.420 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:34.679 [2024-07-13 22:08:53.858763] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:34.679 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:34.679 
22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:34.679 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:34.679 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:34.679 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:34.679 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:34.679 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:25:34.679 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:25:34.679 22:08:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:34.679 [2024-07-13 22:08:54.023284] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:34.679 22:08:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:34.679 22:08:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:34.679 22:08:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:34.679 22:08:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:34.679 22:08:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:34.679 22:08:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:34.679 22:08:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:34.679 22:08:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:34.679 22:08:54 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:34.679 22:08:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:34.679 22:08:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.679 22:08:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:34.938 22:08:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:34.938 "name": "raid_bdev1", 00:25:34.938 "uuid": "99dc275a-9802-4420-9060-56268f61e854", 00:25:34.938 "strip_size_kb": 0, 00:25:34.938 "state": "online", 00:25:34.938 "raid_level": "raid1", 00:25:34.938 "superblock": true, 00:25:34.938 "num_base_bdevs": 2, 00:25:34.938 "num_base_bdevs_discovered": 1, 00:25:34.938 "num_base_bdevs_operational": 1, 00:25:34.938 "base_bdevs_list": [ 00:25:34.938 { 00:25:34.938 "name": null, 00:25:34.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:34.938 "is_configured": false, 00:25:34.938 "data_offset": 256, 00:25:34.938 "data_size": 7936 00:25:34.938 }, 00:25:34.938 { 00:25:34.938 "name": "BaseBdev2", 00:25:34.938 "uuid": "86225afe-23a3-59b8-9b74-d316e9098040", 00:25:34.938 "is_configured": true, 00:25:34.938 "data_offset": 256, 00:25:34.938 "data_size": 7936 00:25:34.938 } 00:25:34.938 ] 00:25:34.938 }' 00:25:34.938 22:08:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:34.939 22:08:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:35.507 22:08:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:35.507 [2024-07-13 22:08:54.865520] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 
00:25:35.507 [2024-07-13 22:08:54.882862] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001a4410 00:25:35.507 [2024-07-13 22:08:54.884652] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:35.507 22:08:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # sleep 1 00:25:36.886 22:08:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:36.886 22:08:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:36.886 22:08:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:36.886 22:08:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:36.886 22:08:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:36.886 22:08:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.886 22:08:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.886 22:08:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:36.886 "name": "raid_bdev1", 00:25:36.886 "uuid": "99dc275a-9802-4420-9060-56268f61e854", 00:25:36.886 "strip_size_kb": 0, 00:25:36.886 "state": "online", 00:25:36.886 "raid_level": "raid1", 00:25:36.886 "superblock": true, 00:25:36.886 "num_base_bdevs": 2, 00:25:36.886 "num_base_bdevs_discovered": 2, 00:25:36.886 "num_base_bdevs_operational": 2, 00:25:36.886 "process": { 00:25:36.886 "type": "rebuild", 00:25:36.886 "target": "spare", 00:25:36.886 "progress": { 00:25:36.886 "blocks": 2816, 00:25:36.886 "percent": 35 00:25:36.886 } 00:25:36.886 }, 00:25:36.886 "base_bdevs_list": [ 00:25:36.886 { 00:25:36.886 
"name": "spare", 00:25:36.886 "uuid": "0c1a133a-8ad3-5be8-b677-e46e393f9d6a", 00:25:36.886 "is_configured": true, 00:25:36.886 "data_offset": 256, 00:25:36.886 "data_size": 7936 00:25:36.886 }, 00:25:36.886 { 00:25:36.886 "name": "BaseBdev2", 00:25:36.886 "uuid": "86225afe-23a3-59b8-9b74-d316e9098040", 00:25:36.886 "is_configured": true, 00:25:36.886 "data_offset": 256, 00:25:36.886 "data_size": 7936 00:25:36.886 } 00:25:36.886 ] 00:25:36.886 }' 00:25:36.886 22:08:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:36.886 22:08:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:36.886 22:08:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:36.886 22:08:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:36.886 22:08:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:37.144 [2024-07-13 22:08:56.306072] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:37.144 [2024-07-13 22:08:56.396218] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:37.144 [2024-07-13 22:08:56.396278] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:37.144 [2024-07-13 22:08:56.396310] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:37.144 [2024-07-13 22:08:56.396322] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:37.144 22:08:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:37.144 22:08:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:25:37.144 22:08:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:37.144 22:08:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:37.144 22:08:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:37.144 22:08:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:37.144 22:08:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:37.144 22:08:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:37.144 22:08:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:37.144 22:08:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:37.144 22:08:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.144 22:08:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.403 22:08:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:37.403 "name": "raid_bdev1", 00:25:37.403 "uuid": "99dc275a-9802-4420-9060-56268f61e854", 00:25:37.403 "strip_size_kb": 0, 00:25:37.403 "state": "online", 00:25:37.403 "raid_level": "raid1", 00:25:37.403 "superblock": true, 00:25:37.403 "num_base_bdevs": 2, 00:25:37.403 "num_base_bdevs_discovered": 1, 00:25:37.403 "num_base_bdevs_operational": 1, 00:25:37.403 "base_bdevs_list": [ 00:25:37.403 { 00:25:37.403 "name": null, 00:25:37.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.403 "is_configured": false, 00:25:37.403 "data_offset": 256, 00:25:37.403 "data_size": 7936 00:25:37.403 }, 00:25:37.403 { 00:25:37.403 "name": "BaseBdev2", 
00:25:37.403 "uuid": "86225afe-23a3-59b8-9b74-d316e9098040", 00:25:37.403 "is_configured": true, 00:25:37.403 "data_offset": 256, 00:25:37.403 "data_size": 7936 00:25:37.403 } 00:25:37.403 ] 00:25:37.403 }' 00:25:37.403 22:08:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:37.403 22:08:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:37.969 22:08:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:37.969 22:08:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:37.969 22:08:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:37.969 22:08:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:37.969 22:08:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:37.969 22:08:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.969 22:08:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.969 22:08:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:37.969 "name": "raid_bdev1", 00:25:37.969 "uuid": "99dc275a-9802-4420-9060-56268f61e854", 00:25:37.969 "strip_size_kb": 0, 00:25:37.969 "state": "online", 00:25:37.969 "raid_level": "raid1", 00:25:37.969 "superblock": true, 00:25:37.969 "num_base_bdevs": 2, 00:25:37.969 "num_base_bdevs_discovered": 1, 00:25:37.969 "num_base_bdevs_operational": 1, 00:25:37.970 "base_bdevs_list": [ 00:25:37.970 { 00:25:37.970 "name": null, 00:25:37.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.970 "is_configured": false, 00:25:37.970 "data_offset": 256, 00:25:37.970 
"data_size": 7936 00:25:37.970 }, 00:25:37.970 { 00:25:37.970 "name": "BaseBdev2", 00:25:37.970 "uuid": "86225afe-23a3-59b8-9b74-d316e9098040", 00:25:37.970 "is_configured": true, 00:25:37.970 "data_offset": 256, 00:25:37.970 "data_size": 7936 00:25:37.970 } 00:25:37.970 ] 00:25:37.970 }' 00:25:37.970 22:08:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:37.970 22:08:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:37.970 22:08:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:37.970 22:08:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:37.970 22:08:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:38.228 [2024-07-13 22:08:57.509271] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:38.228 [2024-07-13 22:08:57.526826] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001a44e0 00:25:38.228 [2024-07-13 22:08:57.528625] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:38.228 22:08:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:39.163 22:08:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:39.163 22:08:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:39.163 22:08:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:39.163 22:08:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:39.163 22:08:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- 
# local raid_bdev_info 00:25:39.163 22:08:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.163 22:08:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:39.421 22:08:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:39.421 "name": "raid_bdev1", 00:25:39.421 "uuid": "99dc275a-9802-4420-9060-56268f61e854", 00:25:39.421 "strip_size_kb": 0, 00:25:39.421 "state": "online", 00:25:39.421 "raid_level": "raid1", 00:25:39.421 "superblock": true, 00:25:39.421 "num_base_bdevs": 2, 00:25:39.421 "num_base_bdevs_discovered": 2, 00:25:39.421 "num_base_bdevs_operational": 2, 00:25:39.421 "process": { 00:25:39.421 "type": "rebuild", 00:25:39.421 "target": "spare", 00:25:39.421 "progress": { 00:25:39.421 "blocks": 2816, 00:25:39.421 "percent": 35 00:25:39.421 } 00:25:39.421 }, 00:25:39.421 "base_bdevs_list": [ 00:25:39.421 { 00:25:39.421 "name": "spare", 00:25:39.421 "uuid": "0c1a133a-8ad3-5be8-b677-e46e393f9d6a", 00:25:39.421 "is_configured": true, 00:25:39.421 "data_offset": 256, 00:25:39.421 "data_size": 7936 00:25:39.421 }, 00:25:39.421 { 00:25:39.421 "name": "BaseBdev2", 00:25:39.421 "uuid": "86225afe-23a3-59b8-9b74-d316e9098040", 00:25:39.421 "is_configured": true, 00:25:39.421 "data_offset": 256, 00:25:39.421 "data_size": 7936 00:25:39.421 } 00:25:39.421 ] 00:25:39.421 }' 00:25:39.421 22:08:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:39.421 22:08:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:39.421 22:08:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:39.421 22:08:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 
00:25:39.421 22:08:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:25:39.421 22:08:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:25:39.421 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:25:39.421 22:08:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:25:39.421 22:08:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:25:39.421 22:08:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:25:39.421 22:08:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@705 -- # local timeout=869 00:25:39.421 22:08:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:25:39.421 22:08:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:39.421 22:08:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:39.421 22:08:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:39.421 22:08:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:39.421 22:08:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:39.421 22:08:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.421 22:08:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:39.680 22:08:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:39.680 "name": "raid_bdev1", 00:25:39.680 "uuid": "99dc275a-9802-4420-9060-56268f61e854", 00:25:39.680 
"strip_size_kb": 0, 00:25:39.680 "state": "online", 00:25:39.680 "raid_level": "raid1", 00:25:39.680 "superblock": true, 00:25:39.680 "num_base_bdevs": 2, 00:25:39.680 "num_base_bdevs_discovered": 2, 00:25:39.680 "num_base_bdevs_operational": 2, 00:25:39.680 "process": { 00:25:39.680 "type": "rebuild", 00:25:39.680 "target": "spare", 00:25:39.680 "progress": { 00:25:39.680 "blocks": 3584, 00:25:39.680 "percent": 45 00:25:39.680 } 00:25:39.680 }, 00:25:39.680 "base_bdevs_list": [ 00:25:39.680 { 00:25:39.680 "name": "spare", 00:25:39.680 "uuid": "0c1a133a-8ad3-5be8-b677-e46e393f9d6a", 00:25:39.680 "is_configured": true, 00:25:39.680 "data_offset": 256, 00:25:39.680 "data_size": 7936 00:25:39.680 }, 00:25:39.680 { 00:25:39.680 "name": "BaseBdev2", 00:25:39.680 "uuid": "86225afe-23a3-59b8-9b74-d316e9098040", 00:25:39.680 "is_configured": true, 00:25:39.680 "data_offset": 256, 00:25:39.680 "data_size": 7936 00:25:39.680 } 00:25:39.680 ] 00:25:39.680 }' 00:25:39.680 22:08:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:39.680 22:08:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:39.680 22:08:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:39.680 22:08:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:39.680 22:08:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:25:41.053 22:09:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:25:41.053 22:09:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:41.053 22:09:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:41.053 22:09:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local 
process_type=rebuild 00:25:41.053 22:09:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:41.054 22:09:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:41.054 22:09:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.054 22:09:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:41.054 22:09:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:41.054 "name": "raid_bdev1", 00:25:41.054 "uuid": "99dc275a-9802-4420-9060-56268f61e854", 00:25:41.054 "strip_size_kb": 0, 00:25:41.054 "state": "online", 00:25:41.054 "raid_level": "raid1", 00:25:41.054 "superblock": true, 00:25:41.054 "num_base_bdevs": 2, 00:25:41.054 "num_base_bdevs_discovered": 2, 00:25:41.054 "num_base_bdevs_operational": 2, 00:25:41.054 "process": { 00:25:41.054 "type": "rebuild", 00:25:41.054 "target": "spare", 00:25:41.054 "progress": { 00:25:41.054 "blocks": 6656, 00:25:41.054 "percent": 83 00:25:41.054 } 00:25:41.054 }, 00:25:41.054 "base_bdevs_list": [ 00:25:41.054 { 00:25:41.054 "name": "spare", 00:25:41.054 "uuid": "0c1a133a-8ad3-5be8-b677-e46e393f9d6a", 00:25:41.054 "is_configured": true, 00:25:41.054 "data_offset": 256, 00:25:41.054 "data_size": 7936 00:25:41.054 }, 00:25:41.054 { 00:25:41.054 "name": "BaseBdev2", 00:25:41.054 "uuid": "86225afe-23a3-59b8-9b74-d316e9098040", 00:25:41.054 "is_configured": true, 00:25:41.054 "data_offset": 256, 00:25:41.054 "data_size": 7936 00:25:41.054 } 00:25:41.054 ] 00:25:41.054 }' 00:25:41.054 22:09:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:41.054 22:09:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:41.054 22:09:00 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:41.054 22:09:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:41.054 22:09:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:25:41.311 [2024-07-13 22:09:00.652172] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:41.311 [2024-07-13 22:09:00.652235] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:41.311 [2024-07-13 22:09:00.652321] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:42.245 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:25:42.245 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:42.245 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:42.245 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:42.245 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:42.245 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:42.245 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.245 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:42.245 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:42.245 "name": "raid_bdev1", 00:25:42.245 "uuid": "99dc275a-9802-4420-9060-56268f61e854", 00:25:42.245 "strip_size_kb": 0, 00:25:42.245 "state": "online", 00:25:42.245 "raid_level": 
"raid1", 00:25:42.245 "superblock": true, 00:25:42.245 "num_base_bdevs": 2, 00:25:42.245 "num_base_bdevs_discovered": 2, 00:25:42.245 "num_base_bdevs_operational": 2, 00:25:42.245 "base_bdevs_list": [ 00:25:42.245 { 00:25:42.245 "name": "spare", 00:25:42.245 "uuid": "0c1a133a-8ad3-5be8-b677-e46e393f9d6a", 00:25:42.245 "is_configured": true, 00:25:42.245 "data_offset": 256, 00:25:42.245 "data_size": 7936 00:25:42.245 }, 00:25:42.245 { 00:25:42.245 "name": "BaseBdev2", 00:25:42.245 "uuid": "86225afe-23a3-59b8-9b74-d316e9098040", 00:25:42.245 "is_configured": true, 00:25:42.245 "data_offset": 256, 00:25:42.245 "data_size": 7936 00:25:42.245 } 00:25:42.245 ] 00:25:42.245 }' 00:25:42.245 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:42.245 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:42.245 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:42.245 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:25:42.245 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # break 00:25:42.245 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:42.245 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:42.245 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:42.245 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:42.245 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:42.245 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all
00:25:42.245 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:42.503 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:25:42.503 "name": "raid_bdev1",
00:25:42.503 "uuid": "99dc275a-9802-4420-9060-56268f61e854",
00:25:42.503 "strip_size_kb": 0,
00:25:42.503 "state": "online",
00:25:42.503 "raid_level": "raid1",
00:25:42.503 "superblock": true,
00:25:42.503 "num_base_bdevs": 2,
00:25:42.503 "num_base_bdevs_discovered": 2,
00:25:42.503 "num_base_bdevs_operational": 2,
00:25:42.503 "base_bdevs_list": [
00:25:42.503 {
00:25:42.503 "name": "spare",
00:25:42.503 "uuid": "0c1a133a-8ad3-5be8-b677-e46e393f9d6a",
00:25:42.503 "is_configured": true,
00:25:42.503 "data_offset": 256,
00:25:42.503 "data_size": 7936
00:25:42.503 },
00:25:42.503 {
00:25:42.503 "name": "BaseBdev2",
00:25:42.503 "uuid": "86225afe-23a3-59b8-9b74-d316e9098040",
00:25:42.503 "is_configured": true,
00:25:42.503 "data_offset": 256,
00:25:42.503 "data_size": 7936
00:25:42.503 }
00:25:42.503 ]
00:25:42.503 }'
00:25:42.503 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:25:42.503 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]]
00:25:42.503 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:25:42.503 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:25:42.503 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:25:42.503 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:25:42.503 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:25:42.503 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:25:42.503 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:25:42.503 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:25:42.503 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:25:42.503 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:25:42.503 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:25:42.503 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp
00:25:42.503 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:42.503 22:09:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:42.761 22:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:25:42.761 "name": "raid_bdev1",
00:25:42.761 "uuid": "99dc275a-9802-4420-9060-56268f61e854",
00:25:42.761 "strip_size_kb": 0,
00:25:42.761 "state": "online",
00:25:42.761 "raid_level": "raid1",
00:25:42.761 "superblock": true,
00:25:42.761 "num_base_bdevs": 2,
00:25:42.761 "num_base_bdevs_discovered": 2,
00:25:42.761 "num_base_bdevs_operational": 2,
00:25:42.761 "base_bdevs_list": [
00:25:42.761 {
00:25:42.761 "name": "spare",
00:25:42.761 "uuid": "0c1a133a-8ad3-5be8-b677-e46e393f9d6a",
00:25:42.761 "is_configured": true,
00:25:42.761 "data_offset": 256,
00:25:42.761 "data_size": 7936
00:25:42.761 },
00:25:42.761 {
00:25:42.761 "name": "BaseBdev2",
00:25:42.761 "uuid": "86225afe-23a3-59b8-9b74-d316e9098040",
00:25:42.761 "is_configured": true,
00:25:42.761 "data_offset": 256,
00:25:42.761 "data_size": 7936
00:25:42.761 }
00:25:42.761 ]
00:25:42.761 }'
00:25:42.761 22:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:25:42.761 22:09:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:43.328 22:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:25:43.328 [2024-07-13 22:09:02.631423] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:25:43.328 [2024-07-13 22:09:02.631459] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:25:43.328 [2024-07-13 22:09:02.631543] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:25:43.328 [2024-07-13 22:09:02.631610] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:25:43.328 [2024-07-13 22:09:02.631622] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline
00:25:43.328 22:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # jq length
00:25:43.328 22:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:43.586 22:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]]
00:25:43.586 22:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # '[' true = true ']'
00:25:43.586 22:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' false = true ']'
00:25:43.586 22:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:25:43.586 22:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:25:43.586 22:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:25:43.586 22:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list
00:25:43.586 22:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:25:43.586 22:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list
00:25:43.586 22:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i
00:25:43.586 22:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:25:43.586 22:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:25:43.586 22:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:25:43.844 /dev/nbd0
00:25:43.844 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:25:43.844 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:25:43.844 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:25:43.844 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i
00:25:43.844 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:25:43.844 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:25:43.844 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:25:43.844 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break
00:25:43.844 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:25:43.844 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:25:43.844 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:25:43.844 1+0 records in
00:25:43.844 1+0 records out
00:25:43.844 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230476 s, 17.8 MB/s
00:25:43.844 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:25:43.845 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096
00:25:43.845 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:25:43.845 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:25:43.845 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0
00:25:43.845 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:25:43.845 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:25:43.845 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1
00:25:43.845 /dev/nbd1
00:25:43.845 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:25:43.845 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:25:43.845 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:25:43.845 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i
00:25:43.845 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:25:43.845 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:25:43.845 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:25:43.845 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break
00:25:43.845 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:25:43.845 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:25:43.845 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:25:44.102 1+0 records in
00:25:44.102 1+0 records out
00:25:44.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030915 s, 13.2 MB/s
00:25:44.102 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:25:44.102 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096
00:25:44.102 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:25:44.102 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:25:44.102 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0
00:25:44.102 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:25:44.102 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:25:44.102 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:25:44.102 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1'
00:25:44.102 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:25:44.102 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:25:44.102 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list
00:25:44.102 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i
00:25:44.102 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:25:44.102 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:25:44.359 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:25:44.359 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:25:44.359 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:25:44.359 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:25:44.359 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:25:44.360 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:25:44.360 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break
00:25:44.360 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0
00:25:44.360 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:25:44.360 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:25:44.617 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:25:44.617 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:25:44.617 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:25:44.617 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:25:44.617 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:25:44.617 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:25:44.617 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break
00:25:44.617 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0
00:25:44.617 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@742 -- # '[' true = true ']'
00:25:44.617 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare
00:25:44.617 22:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:25:44.874 [2024-07-13 22:09:04.120293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:25:44.874 [2024-07-13 22:09:04.120368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:44.874 [2024-07-13 22:09:04.120393] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043280
00:25:44.874 [2024-07-13 22:09:04.120406] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:44.874 [2024-07-13 22:09:04.122616] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:44.874 [2024-07-13 22:09:04.122646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:25:44.874 [2024-07-13 22:09:04.122738] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:25:44.874 [2024-07-13 22:09:04.122794] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:25:44.874 [2024-07-13 22:09:04.122973] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:25:44.874 spare
00:25:44.874 22:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:25:44.874 22:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:25:44.874 22:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:25:44.874 22:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:25:44.874 22:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:25:44.874 22:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:25:44.874 22:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:25:44.874 22:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:25:44.874 22:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:25:44.875 22:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp
00:25:44.875 22:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:44.875 22:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
[2024-07-13 22:09:04.223303] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043880
00:25:44.875 [2024-07-13 22:09:04.223336] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:25:44.875 [2024-07-13 22:09:04.223641] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c9390
00:25:44.875 [2024-07-13 22:09:04.223883] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043880
00:25:44.875 [2024-07-13 22:09:04.223895] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043880
00:25:44.875 [2024-07-13 22:09:04.224074] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:25:45.132 22:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:25:45.132 "name": "raid_bdev1",
00:25:45.132 "uuid": "99dc275a-9802-4420-9060-56268f61e854",
00:25:45.132 "strip_size_kb": 0,
00:25:45.132 "state": "online",
00:25:45.132 "raid_level": "raid1",
00:25:45.132 "superblock": true,
00:25:45.132 "num_base_bdevs": 2,
00:25:45.132 "num_base_bdevs_discovered": 2,
00:25:45.132 "num_base_bdevs_operational": 2,
00:25:45.132 "base_bdevs_list": [
00:25:45.132 {
00:25:45.132 "name": "spare",
00:25:45.132 "uuid": "0c1a133a-8ad3-5be8-b677-e46e393f9d6a",
00:25:45.132 "is_configured": true,
00:25:45.132 "data_offset": 256,
00:25:45.132 "data_size": 7936
00:25:45.132 },
00:25:45.132 {
00:25:45.132 "name": "BaseBdev2",
00:25:45.132 "uuid": "86225afe-23a3-59b8-9b74-d316e9098040",
00:25:45.132 "is_configured": true,
00:25:45.132 "data_offset": 256,
00:25:45.132 "data_size": 7936
00:25:45.132 }
00:25:45.132 ]
00:25:45.132 }'
00:25:45.132 22:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:25:45.132 22:09:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:45.697 22:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none
00:25:45.697 22:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:25:45.697 22:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none
00:25:45.697 22:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none
00:25:45.697 22:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:25:45.697 22:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:45.697 22:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:45.697 22:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:25:45.697 "name": "raid_bdev1",
00:25:45.697 "uuid": "99dc275a-9802-4420-9060-56268f61e854",
00:25:45.697 "strip_size_kb": 0,
00:25:45.697 "state": "online",
00:25:45.697 "raid_level": "raid1",
00:25:45.697 "superblock": true,
00:25:45.697 "num_base_bdevs": 2,
00:25:45.697 "num_base_bdevs_discovered": 2,
00:25:45.697 "num_base_bdevs_operational": 2,
00:25:45.697 "base_bdevs_list": [
00:25:45.697 {
00:25:45.697 "name": "spare",
00:25:45.697 "uuid": "0c1a133a-8ad3-5be8-b677-e46e393f9d6a",
00:25:45.697 "is_configured": true,
00:25:45.697 "data_offset": 256,
00:25:45.697 "data_size": 7936
00:25:45.697 },
00:25:45.697 {
00:25:45.697 "name": "BaseBdev2",
00:25:45.697 "uuid": "86225afe-23a3-59b8-9b74-d316e9098040",
00:25:45.697 "is_configured": true,
00:25:45.697 "data_offset": 256,
00:25:45.697 "data_size": 7936
00:25:45.697 }
00:25:45.697 ]
00:25:45.697 }'
00:25:45.697 22:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:25:45.697 22:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]]
00:25:45.697 22:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:25:45.697 22:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:25:45.697 22:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:45.697 22:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name'
00:25:45.955 22:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]]
00:25:45.955 22:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:25:46.212 [2024-07-13 22:09:05.359627] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:25:46.212 22:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:25:46.212 22:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:25:46.212 22:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:25:46.212 22:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:25:46.212 22:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:25:46.212 22:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1
00:25:46.212 22:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:25:46.212 22:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:25:46.212 22:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:25:46.212 22:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp
00:25:46.212 22:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:46.212 22:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:46.212 22:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:25:46.212 "name": "raid_bdev1",
00:25:46.212 "uuid": "99dc275a-9802-4420-9060-56268f61e854",
00:25:46.212 "strip_size_kb": 0,
00:25:46.212 "state": "online",
00:25:46.212 "raid_level": "raid1",
00:25:46.212 "superblock": true,
00:25:46.212 "num_base_bdevs": 2,
00:25:46.212 "num_base_bdevs_discovered": 1,
00:25:46.212 "num_base_bdevs_operational": 1,
00:25:46.212 "base_bdevs_list": [
00:25:46.212 {
00:25:46.212 "name": null,
00:25:46.212 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:46.212 "is_configured": false,
00:25:46.212 "data_offset": 256,
00:25:46.212 "data_size": 7936
00:25:46.212 },
00:25:46.212 {
00:25:46.213 "name": "BaseBdev2",
00:25:46.213 "uuid": "86225afe-23a3-59b8-9b74-d316e9098040",
00:25:46.213 "is_configured": true,
00:25:46.213 "data_offset": 256,
00:25:46.213 "data_size": 7936
00:25:46.213 }
00:25:46.213 ]
00:25:46.213 }'
00:25:46.213 22:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:25:46.213 22:09:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:46.777 22:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:25:47.056 [2024-07-13 22:09:06.201866] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:25:47.056 [2024-07-13 22:09:06.202084] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:25:47.056 [2024-07-13 22:09:06.202105] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:25:47.056 [2024-07-13 22:09:06.202139] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:25:47.056 [2024-07-13 22:09:06.220396] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c9460
00:25:47.056 [2024-07-13 22:09:06.222178] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:25:47.056 22:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # sleep 1
00:25:47.988 22:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:25:47.988 22:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:25:47.988 22:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:25:47.988 22:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare
00:25:47.988 22:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:25:47.988 22:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:47.988 22:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:48.246 22:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:25:48.246 "name": "raid_bdev1",
00:25:48.246 "uuid": "99dc275a-9802-4420-9060-56268f61e854",
00:25:48.246 "strip_size_kb": 0,
00:25:48.246 "state": "online",
00:25:48.246 "raid_level": "raid1",
00:25:48.246 "superblock": true,
00:25:48.246 "num_base_bdevs": 2,
00:25:48.246 "num_base_bdevs_discovered": 2,
00:25:48.246 "num_base_bdevs_operational": 2,
00:25:48.246 "process": {
00:25:48.246 "type": "rebuild",
00:25:48.246 "target": "spare",
00:25:48.246 "progress": {
00:25:48.246 "blocks": 2816,
00:25:48.246 "percent": 35
00:25:48.246 }
00:25:48.246 },
00:25:48.246 "base_bdevs_list": [
00:25:48.246 {
00:25:48.246 "name": "spare",
00:25:48.246 "uuid": "0c1a133a-8ad3-5be8-b677-e46e393f9d6a",
00:25:48.246 "is_configured": true,
00:25:48.246 "data_offset": 256,
00:25:48.246 "data_size": 7936
00:25:48.246 },
00:25:48.246 {
00:25:48.246 "name": "BaseBdev2",
00:25:48.246 "uuid": "86225afe-23a3-59b8-9b74-d316e9098040",
00:25:48.246 "is_configured": true,
00:25:48.246 "data_offset": 256,
00:25:48.246 "data_size": 7936
00:25:48.246 }
00:25:48.246 ]
00:25:48.246 }'
00:25:48.246 22:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:25:48.246 22:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:25:48.246 22:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:25:48.246 22:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:25:48.246 22:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare
00:25:48.504 [2024-07-13 22:09:07.643605] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:25:48.504 [2024-07-13 22:09:07.733683] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:25:48.504 [2024-07-13 22:09:07.733741] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:25:48.504 [2024-07-13 22:09:07.733757] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:25:48.504 [2024-07-13 22:09:07.733768] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:25:48.504 22:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:25:48.504 22:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:25:48.504 22:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:25:48.504 22:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:25:48.504 22:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:25:48.504 22:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1
00:25:48.504 22:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:25:48.504 22:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:25:48.504 22:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:25:48.504 22:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp
00:25:48.504 22:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:48.504 22:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:48.762 22:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:25:48.762 "name": "raid_bdev1",
00:25:48.762 "uuid": "99dc275a-9802-4420-9060-56268f61e854",
00:25:48.762 "strip_size_kb": 0,
00:25:48.762 "state": "online",
00:25:48.762 "raid_level": "raid1",
00:25:48.763 "superblock": true,
00:25:48.763 "num_base_bdevs": 2,
00:25:48.763 "num_base_bdevs_discovered": 1,
00:25:48.763 "num_base_bdevs_operational": 1,
00:25:48.763 "base_bdevs_list": [
00:25:48.763 {
00:25:48.763 "name": null,
00:25:48.763 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:48.763 "is_configured": false,
00:25:48.763 "data_offset": 256,
00:25:48.763 "data_size": 7936
00:25:48.763 },
00:25:48.763 {
00:25:48.763 "name": "BaseBdev2",
00:25:48.763 "uuid": "86225afe-23a3-59b8-9b74-d316e9098040",
00:25:48.763 "is_configured": true,
00:25:48.763 "data_offset": 256,
00:25:48.763 "data_size": 7936
00:25:48.763 }
00:25:48.763 ]
00:25:48.763 }'
00:25:48.763 22:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:25:48.763 22:09:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:49.329 22:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:25:49.329 [2024-07-13 22:09:08.578335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:25:49.329 [2024-07-13 22:09:08.578418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:49.329 [2024-07-13 22:09:08.578443] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043e80
00:25:49.329 [2024-07-13 22:09:08.578457] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:49.329 [2024-07-13 22:09:08.578982] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:49.329 [2024-07-13 22:09:08.579012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:25:49.329 [2024-07-13 22:09:08.579104] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:25:49.329 [2024-07-13 22:09:08.579121] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:25:49.329 [2024-07-13 22:09:08.579133] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:25:49.329 [2024-07-13 22:09:08.579161] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:25:49.329 [2024-07-13 22:09:08.596822] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c9530
00:25:49.329 spare
[2024-07-13 22:09:08.598616] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:25:49.329 22:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # sleep 1
00:25:50.263 22:09:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:25:50.263 22:09:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:25:50.263 22:09:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:25:50.263 22:09:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare
00:25:50.263 22:09:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:25:50.263 22:09:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:50.263 22:09:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:50.521 22:09:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:25:50.521 "name": "raid_bdev1",
00:25:50.521 "uuid": "99dc275a-9802-4420-9060-56268f61e854",
00:25:50.521 "strip_size_kb": 0,
00:25:50.521 "state": "online",
00:25:50.521 "raid_level": "raid1",
00:25:50.521 "superblock": true,
00:25:50.521 "num_base_bdevs": 2,
00:25:50.521 "num_base_bdevs_discovered": 2,
00:25:50.521 "num_base_bdevs_operational": 2,
00:25:50.521 "process": {
00:25:50.521 "type": "rebuild",
00:25:50.521 "target": "spare",
00:25:50.521 "progress": {
00:25:50.521 "blocks": 2816,
00:25:50.521 "percent": 35
00:25:50.521 }
00:25:50.521 },
00:25:50.521 "base_bdevs_list": [
00:25:50.521 {
00:25:50.521 "name": "spare",
00:25:50.521 "uuid": "0c1a133a-8ad3-5be8-b677-e46e393f9d6a",
00:25:50.521 "is_configured": true,
00:25:50.521 "data_offset": 256,
00:25:50.521 "data_size": 7936
00:25:50.521 },
00:25:50.521 {
00:25:50.521 "name": "BaseBdev2",
00:25:50.521 "uuid": "86225afe-23a3-59b8-9b74-d316e9098040",
00:25:50.521 "is_configured": true,
00:25:50.521 "data_offset": 256,
00:25:50.521 "data_size": 7936
00:25:50.521 }
00:25:50.521 ]
00:25:50.521 }'
00:25:50.521 22:09:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:25:50.521 22:09:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:25:50.521 22:09:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:25:50.521 22:09:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:25:50.521 22:09:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare
00:25:50.779 [2024-07-13 22:09:10.020127] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:25:50.779 [2024-07-13 22:09:10.110274] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:25:50.779 [2024-07-13 22:09:10.110327] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:25:50.779 [2024-07-13 22:09:10.110346] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:25:50.779 [2024-07-13 22:09:10.110355] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:25:50.779 22:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:25:50.779 22:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:25:50.779 22:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:25:50.779 22:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:25:50.779 22:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:25:50.779 22:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1
00:25:50.779 22:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:25:50.779 22:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:25:50.779 22:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:25:50.779 22:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp
00:25:51.038 22:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:51.038 22:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:51.038 22:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:25:51.038 "name": "raid_bdev1",
00:25:51.038 "uuid": "99dc275a-9802-4420-9060-56268f61e854",
00:25:51.038 "strip_size_kb": 0,
00:25:51.038 "state": "online",
00:25:51.038 "raid_level": "raid1",
00:25:51.038 "superblock": true,
00:25:51.038 "num_base_bdevs": 2,
00:25:51.038 "num_base_bdevs_discovered": 1,
"num_base_bdevs_operational": 1, 00:25:51.038 "base_bdevs_list": [ 00:25:51.038 { 00:25:51.038 "name": null, 00:25:51.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.038 "is_configured": false, 00:25:51.038 "data_offset": 256, 00:25:51.038 "data_size": 7936 00:25:51.038 }, 00:25:51.038 { 00:25:51.038 "name": "BaseBdev2", 00:25:51.038 "uuid": "86225afe-23a3-59b8-9b74-d316e9098040", 00:25:51.038 "is_configured": true, 00:25:51.038 "data_offset": 256, 00:25:51.038 "data_size": 7936 00:25:51.038 } 00:25:51.038 ] 00:25:51.038 }' 00:25:51.038 22:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:51.038 22:09:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:51.622 22:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:51.622 22:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:51.622 22:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:51.622 22:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:51.622 22:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:51.622 22:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.622 22:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:51.622 22:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:51.622 "name": "raid_bdev1", 00:25:51.622 "uuid": "99dc275a-9802-4420-9060-56268f61e854", 00:25:51.622 "strip_size_kb": 0, 00:25:51.622 "state": "online", 00:25:51.622 "raid_level": "raid1", 00:25:51.622 "superblock": true, 00:25:51.622 
"num_base_bdevs": 2, 00:25:51.622 "num_base_bdevs_discovered": 1, 00:25:51.622 "num_base_bdevs_operational": 1, 00:25:51.622 "base_bdevs_list": [ 00:25:51.622 { 00:25:51.622 "name": null, 00:25:51.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.622 "is_configured": false, 00:25:51.622 "data_offset": 256, 00:25:51.622 "data_size": 7936 00:25:51.622 }, 00:25:51.622 { 00:25:51.622 "name": "BaseBdev2", 00:25:51.622 "uuid": "86225afe-23a3-59b8-9b74-d316e9098040", 00:25:51.622 "is_configured": true, 00:25:51.622 "data_offset": 256, 00:25:51.622 "data_size": 7936 00:25:51.622 } 00:25:51.622 ] 00:25:51.622 }' 00:25:51.622 22:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:51.922 22:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:51.922 22:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:51.922 22:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:51.922 22:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:25:51.922 22:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:52.180 [2024-07-13 22:09:11.395824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:52.180 [2024-07-13 22:09:11.395881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:52.180 [2024-07-13 22:09:11.395929] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044480 00:25:52.180 [2024-07-13 22:09:11.395945] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:25:52.180 [2024-07-13 22:09:11.396453] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:52.180 [2024-07-13 22:09:11.396474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:52.180 [2024-07-13 22:09:11.396555] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:52.180 [2024-07-13 22:09:11.396570] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:52.180 [2024-07-13 22:09:11.396583] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:52.180 BaseBdev1 00:25:52.180 22:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # sleep 1 00:25:53.115 22:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:53.115 22:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:53.115 22:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:53.115 22:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:53.115 22:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:53.115 22:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:53.115 22:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:53.115 22:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:53.115 22:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:53.115 22:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:53.115 22:09:12 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.115 22:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.373 22:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:53.373 "name": "raid_bdev1", 00:25:53.373 "uuid": "99dc275a-9802-4420-9060-56268f61e854", 00:25:53.373 "strip_size_kb": 0, 00:25:53.373 "state": "online", 00:25:53.373 "raid_level": "raid1", 00:25:53.373 "superblock": true, 00:25:53.373 "num_base_bdevs": 2, 00:25:53.373 "num_base_bdevs_discovered": 1, 00:25:53.373 "num_base_bdevs_operational": 1, 00:25:53.373 "base_bdevs_list": [ 00:25:53.373 { 00:25:53.373 "name": null, 00:25:53.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.373 "is_configured": false, 00:25:53.373 "data_offset": 256, 00:25:53.373 "data_size": 7936 00:25:53.373 }, 00:25:53.373 { 00:25:53.373 "name": "BaseBdev2", 00:25:53.373 "uuid": "86225afe-23a3-59b8-9b74-d316e9098040", 00:25:53.373 "is_configured": true, 00:25:53.373 "data_offset": 256, 00:25:53.373 "data_size": 7936 00:25:53.373 } 00:25:53.373 ] 00:25:53.373 }' 00:25:53.373 22:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:53.373 22:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:53.937 22:09:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:53.938 22:09:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:53.938 22:09:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:53.938 22:09:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:53.938 22:09:13 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:53.938 22:09:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.938 22:09:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.938 22:09:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:53.938 "name": "raid_bdev1", 00:25:53.938 "uuid": "99dc275a-9802-4420-9060-56268f61e854", 00:25:53.938 "strip_size_kb": 0, 00:25:53.938 "state": "online", 00:25:53.938 "raid_level": "raid1", 00:25:53.938 "superblock": true, 00:25:53.938 "num_base_bdevs": 2, 00:25:53.938 "num_base_bdevs_discovered": 1, 00:25:53.938 "num_base_bdevs_operational": 1, 00:25:53.938 "base_bdevs_list": [ 00:25:53.938 { 00:25:53.938 "name": null, 00:25:53.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.938 "is_configured": false, 00:25:53.938 "data_offset": 256, 00:25:53.938 "data_size": 7936 00:25:53.938 }, 00:25:53.938 { 00:25:53.938 "name": "BaseBdev2", 00:25:53.938 "uuid": "86225afe-23a3-59b8-9b74-d316e9098040", 00:25:53.938 "is_configured": true, 00:25:53.938 "data_offset": 256, 00:25:53.938 "data_size": 7936 00:25:53.938 } 00:25:53.938 ] 00:25:53.938 }' 00:25:53.938 22:09:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:53.938 22:09:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:53.938 22:09:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:54.195 22:09:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:54.195 22:09:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:54.195 22:09:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@648 -- # local es=0 00:25:54.195 22:09:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:54.195 22:09:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:54.195 22:09:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:54.195 22:09:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:54.195 22:09:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:54.195 22:09:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:54.195 22:09:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:54.195 22:09:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:54.195 22:09:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:25:54.195 22:09:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:54.195 [2024-07-13 22:09:13.513561] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:54.195 [2024-07-13 22:09:13.513734] 
bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:54.195 [2024-07-13 22:09:13.513751] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:54.195 request: 00:25:54.195 { 00:25:54.195 "base_bdev": "BaseBdev1", 00:25:54.195 "raid_bdev": "raid_bdev1", 00:25:54.195 "method": "bdev_raid_add_base_bdev", 00:25:54.195 "req_id": 1 00:25:54.195 } 00:25:54.195 Got JSON-RPC error response 00:25:54.195 response: 00:25:54.195 { 00:25:54.195 "code": -22, 00:25:54.195 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:54.195 } 00:25:54.195 22:09:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # es=1 00:25:54.195 22:09:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:54.195 22:09:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:54.195 22:09:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:54.195 22:09:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # sleep 1 00:25:55.565 22:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:55.565 22:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:55.565 22:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:55.565 22:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:55.565 22:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:55.565 22:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:55.565 22:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:25:55.565 22:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:55.565 22:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:55.565 22:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:55.565 22:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.565 22:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:55.565 22:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:55.565 "name": "raid_bdev1", 00:25:55.565 "uuid": "99dc275a-9802-4420-9060-56268f61e854", 00:25:55.565 "strip_size_kb": 0, 00:25:55.565 "state": "online", 00:25:55.565 "raid_level": "raid1", 00:25:55.565 "superblock": true, 00:25:55.565 "num_base_bdevs": 2, 00:25:55.565 "num_base_bdevs_discovered": 1, 00:25:55.565 "num_base_bdevs_operational": 1, 00:25:55.565 "base_bdevs_list": [ 00:25:55.565 { 00:25:55.565 "name": null, 00:25:55.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.565 "is_configured": false, 00:25:55.565 "data_offset": 256, 00:25:55.565 "data_size": 7936 00:25:55.565 }, 00:25:55.565 { 00:25:55.565 "name": "BaseBdev2", 00:25:55.565 "uuid": "86225afe-23a3-59b8-9b74-d316e9098040", 00:25:55.565 "is_configured": true, 00:25:55.565 "data_offset": 256, 00:25:55.565 "data_size": 7936 00:25:55.565 } 00:25:55.565 ] 00:25:55.565 }' 00:25:55.565 22:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:55.565 22:09:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:55.836 22:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:55.836 22:09:15 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:55.836 22:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:55.836 22:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:55.836 22:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:55.836 22:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.836 22:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.104 22:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:56.104 "name": "raid_bdev1", 00:25:56.104 "uuid": "99dc275a-9802-4420-9060-56268f61e854", 00:25:56.104 "strip_size_kb": 0, 00:25:56.104 "state": "online", 00:25:56.104 "raid_level": "raid1", 00:25:56.104 "superblock": true, 00:25:56.104 "num_base_bdevs": 2, 00:25:56.104 "num_base_bdevs_discovered": 1, 00:25:56.104 "num_base_bdevs_operational": 1, 00:25:56.104 "base_bdevs_list": [ 00:25:56.104 { 00:25:56.104 "name": null, 00:25:56.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.104 "is_configured": false, 00:25:56.104 "data_offset": 256, 00:25:56.104 "data_size": 7936 00:25:56.104 }, 00:25:56.104 { 00:25:56.104 "name": "BaseBdev2", 00:25:56.104 "uuid": "86225afe-23a3-59b8-9b74-d316e9098040", 00:25:56.104 "is_configured": true, 00:25:56.104 "data_offset": 256, 00:25:56.104 "data_size": 7936 00:25:56.104 } 00:25:56.104 ] 00:25:56.104 }' 00:25:56.104 22:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:56.104 22:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:56.104 22:09:15 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:56.104 22:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:56.104 22:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # killprocess 1501062 00:25:56.104 22:09:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 1501062 ']' 00:25:56.104 22:09:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 1501062 00:25:56.104 22:09:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:25:56.104 22:09:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:56.104 22:09:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1501062 00:25:56.104 22:09:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:56.104 22:09:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:56.104 22:09:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1501062' 00:25:56.104 killing process with pid 1501062 00:25:56.104 22:09:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@967 -- # kill 1501062 00:25:56.104 Received shutdown signal, test time was about 60.000000 seconds 00:25:56.104 00:25:56.104 Latency(us) 00:25:56.104 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:56.104 =================================================================================================================== 00:25:56.104 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:56.104 [2024-07-13 22:09:15.416942] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:56.104 [2024-07-13 22:09:15.417082] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:25:56.104 22:09:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # wait 1501062 00:25:56.104 [2024-07-13 22:09:15.417140] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:56.104 [2024-07-13 22:09:15.417153] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043880 name raid_bdev1, state offline 00:25:56.362 [2024-07-13 22:09:15.647203] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:57.739 22:09:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # return 0 00:25:57.739 00:25:57.739 real 0m27.327s 00:25:57.739 user 0m40.091s 00:25:57.739 sys 0m4.220s 00:25:57.739 22:09:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:57.739 22:09:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:57.739 ************************************ 00:25:57.739 END TEST raid_rebuild_test_sb_4k 00:25:57.739 ************************************ 00:25:57.739 22:09:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:57.739 22:09:16 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:25:57.739 22:09:16 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:25:57.739 22:09:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:57.739 22:09:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:57.739 22:09:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:57.739 ************************************ 00:25:57.739 START TEST raid_state_function_test_sb_md_separate 00:25:57.739 ************************************ 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:25:57.739 22:09:16 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:25:57.739 22:09:16 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=1506575 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1506575' 00:25:57.739 Process raid pid: 1506575 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 1506575 /var/tmp/spdk-raid.sock 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 1506575 ']' 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-raid.sock...' 00:25:57.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:57.739 22:09:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:57.739 [2024-07-13 22:09:17.021485] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:57.739 [2024-07-13 22:09:17.021576] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.740 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.740 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.740 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.740 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.740 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.740 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.740 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.740 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.740 EAL: 
Requested device 0000:3d:02.0 cannot be used
00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:57.740 EAL: Requested device 0000:3d:02.1 cannot be used
00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:57.740 EAL: Requested device 0000:3d:02.2 cannot be used
00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:57.740 EAL: Requested device 0000:3d:02.3 cannot be used
00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:57.740 EAL: Requested device 0000:3d:02.4 cannot be used
00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:57.740 EAL: Requested device 0000:3d:02.5 cannot be used
00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:57.740 EAL: Requested device 0000:3d:02.6 cannot be used
00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:57.740 EAL: Requested device 0000:3d:02.7 cannot be used
00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:57.740 EAL: Requested device 0000:3f:01.0 cannot be used
00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:57.740 EAL: Requested device 0000:3f:01.1 cannot be used
00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:57.740 EAL: Requested device 0000:3f:01.2 cannot be used
00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:57.740 EAL: Requested device 0000:3f:01.3 cannot be used
00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:57.740 EAL: Requested device 0000:3f:01.4 cannot be used
00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:57.740 EAL: Requested device 0000:3f:01.5 cannot be used
00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:57.740 EAL: Requested device 0000:3f:01.6 cannot be used
00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:57.740 EAL: Requested device 0000:3f:01.7 cannot be used
00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:57.740 EAL: Requested device 0000:3f:02.0 cannot be used
00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:57.740 EAL: Requested device 0000:3f:02.1 cannot be used
00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:57.740 EAL: Requested device 0000:3f:02.2 cannot be used
00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:57.740 EAL: Requested device 0000:3f:02.3 cannot be used
00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:57.740 EAL: Requested device 0000:3f:02.4 cannot be used
00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:57.740 EAL: Requested device 0000:3f:02.5 cannot be used
00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:57.740 EAL: Requested device 0000:3f:02.6 cannot be used
00:25:57.740 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:57.740 EAL: Requested device 0000:3f:02.7 cannot be used
00:25:57.999 [2024-07-13 22:09:17.185316] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:58.258 [2024-07-13 22:09:17.394205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:58.258 [2024-07-13 22:09:17.640943] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:25:58.258 [2024-07-13 22:09:17.640973] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:25:58.517 22:09:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:58.517 22:09:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0
00:25:58.517 22:09:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:25:58.777 [2024-07-13 22:09:17.923906] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:25:58.777 [2024-07-13 22:09:17.923955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:25:58.777 [2024-07-13 22:09:17.923966] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:25:58.777 [2024-07-13 22:09:17.923994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:25:58.777 22:09:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:25:58.777 22:09:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:25:58.777 22:09:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:25:58.777 22:09:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:25:58.777 22:09:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:25:58.777 22:09:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:25:58.777 22:09:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:25:58.777 22:09:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:25:58.777 22:09:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:25:58.777 22:09:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp
00:25:58.777 22:09:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:58.777 22:09:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:58.777 22:09:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:25:58.777 "name": "Existed_Raid",
00:25:58.777 "uuid": "cffe6509-bd60-4d42-a460-2511d4b7aa6b",
00:25:58.777 "strip_size_kb": 0,
00:25:58.777 "state": "configuring",
00:25:58.777 "raid_level": "raid1",
00:25:58.777 "superblock": true,
00:25:58.777 "num_base_bdevs": 2,
00:25:58.777 "num_base_bdevs_discovered": 0,
00:25:58.777 "num_base_bdevs_operational": 2,
00:25:58.777 "base_bdevs_list": [
00:25:58.777 {
00:25:58.777 "name": "BaseBdev1",
00:25:58.777 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:58.777 "is_configured": false,
00:25:58.777 "data_offset": 0,
00:25:58.777 "data_size": 0
00:25:58.777 },
00:25:58.777 {
00:25:58.777 "name": "BaseBdev2",
00:25:58.777 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:58.777 "is_configured": false,
00:25:58.777 "data_offset": 0,
00:25:58.777 "data_size": 0
00:25:58.777 }
00:25:58.777 ]
00:25:58.777 }'
00:25:58.777 22:09:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:25:58.777 22:09:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:59.344 22:09:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:25:59.602 [2024-07-13 22:09:18.757969] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:25:59.602 [2024-07-13 22:09:18.758001] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring
00:25:59.602 22:09:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:25:59.602 [2024-07-13 22:09:18.930457] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:25:59.602 [2024-07-13 22:09:18.930496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:25:59.602 [2024-07-13 22:09:18.930506] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:25:59.602 [2024-07-13 22:09:18.930534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:25:59.602 22:09:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1
00:25:59.860 [2024-07-13 22:09:19.140409] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:25:59.860 BaseBdev1
00:25:59.860 22:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1
00:25:59.860 22:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:25:59.860 22:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:25:59.860 22:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i
00:25:59.860 22:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:25:59.860 22:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:25:59.860 22:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:26:00.118 22:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:26:00.118 [
00:26:00.118 {
00:26:00.118 "name": "BaseBdev1",
00:26:00.118 "aliases": [
00:26:00.118 "f321c5be-d245-4cda-a59c-b2505468b0e8"
00:26:00.118 ],
00:26:00.118 "product_name": "Malloc disk",
00:26:00.118 "block_size": 4096,
00:26:00.118 "num_blocks": 8192,
00:26:00.118 "uuid": "f321c5be-d245-4cda-a59c-b2505468b0e8",
00:26:00.118 "md_size": 32,
00:26:00.118 "md_interleave": false,
00:26:00.118 "dif_type": 0,
00:26:00.118 "assigned_rate_limits": {
00:26:00.118 "rw_ios_per_sec": 0,
00:26:00.118 "rw_mbytes_per_sec": 0,
00:26:00.118 "r_mbytes_per_sec": 0,
00:26:00.118 "w_mbytes_per_sec": 0
00:26:00.118 },
00:26:00.118 "claimed": true,
00:26:00.118 "claim_type": "exclusive_write",
00:26:00.118 "zoned": false,
00:26:00.118 "supported_io_types": {
00:26:00.118 "read": true,
00:26:00.118 "write": true,
00:26:00.118 "unmap": true,
00:26:00.118 "flush": true,
00:26:00.118 "reset": true,
00:26:00.118 "nvme_admin": false,
00:26:00.118 "nvme_io": false,
00:26:00.118 "nvme_io_md": false,
00:26:00.118 "write_zeroes": true,
00:26:00.118 "zcopy": true,
00:26:00.118 "get_zone_info": false,
00:26:00.118 "zone_management": false,
00:26:00.118 "zone_append": false,
00:26:00.118 "compare": false,
00:26:00.118 "compare_and_write": false,
00:26:00.118 "abort": true,
00:26:00.118 "seek_hole": false,
00:26:00.118 "seek_data": false,
00:26:00.118 "copy": true,
00:26:00.118 "nvme_iov_md": false
00:26:00.118 },
00:26:00.118 "memory_domains": [
00:26:00.118 {
00:26:00.118 "dma_device_id": "system",
00:26:00.118 "dma_device_type": 1
00:26:00.118 },
00:26:00.118 {
00:26:00.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:26:00.118 "dma_device_type": 2
00:26:00.118 }
00:26:00.118 ],
00:26:00.118 "driver_specific": {}
00:26:00.118 }
00:26:00.118 ]
00:26:00.118 22:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0
00:26:00.118 22:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:26:00.118 22:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:26:00.118 22:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:26:00.119 22:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:26:00.119 22:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:26:00.119 22:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:26:00.119 22:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:26:00.119 22:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:26:00.119 22:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:26:00.119 22:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp
00:26:00.119 22:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:26:00.119 22:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:26:00.377 22:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:26:00.378 "name": "Existed_Raid",
00:26:00.378 "uuid": "24e919bc-aa88-46bf-af62-04689f7d68f0",
00:26:00.378 "strip_size_kb": 0,
00:26:00.378 "state": "configuring",
00:26:00.378 "raid_level": "raid1",
00:26:00.378 "superblock": true,
00:26:00.378 "num_base_bdevs": 2,
00:26:00.378 "num_base_bdevs_discovered": 1,
00:26:00.378 "num_base_bdevs_operational": 2,
00:26:00.378 "base_bdevs_list": [
00:26:00.378 {
00:26:00.378 "name": "BaseBdev1",
00:26:00.378 "uuid": "f321c5be-d245-4cda-a59c-b2505468b0e8",
00:26:00.378 "is_configured": true,
00:26:00.378 "data_offset": 256,
00:26:00.378 "data_size": 7936
00:26:00.378 },
00:26:00.378 {
00:26:00.378 "name": "BaseBdev2",
00:26:00.378 "uuid": "00000000-0000-0000-0000-000000000000",
00:26:00.378 "is_configured": false,
00:26:00.378 "data_offset": 0,
00:26:00.378 "data_size": 0
00:26:00.378 }
00:26:00.378 ]
00:26:00.378 }'
00:26:00.378 22:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:26:00.378 22:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:26:00.946 22:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:26:00.946 [2024-07-13 22:09:20.323606] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:26:00.946 [2024-07-13 22:09:20.323655] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring
00:26:01.205 22:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:26:01.205 [2024-07-13 22:09:20.504110] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:26:01.205 [2024-07-13 22:09:20.505767] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:26:01.205 [2024-07-13 22:09:20.505801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:26:01.206 22:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 ))
00:26:01.206 22:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:26:01.206 22:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:26:01.206 22:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:26:01.206 22:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:26:01.206 22:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:26:01.206 22:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:26:01.206 22:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:26:01.206 22:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:26:01.206 22:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:26:01.206 22:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:26:01.206 22:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp
00:26:01.206 22:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:26:01.206 22:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:26:01.464 22:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:26:01.464 "name": "Existed_Raid",
00:26:01.464 "uuid": "f91512ab-15f0-4449-ba27-ccfed7453bec",
00:26:01.464 "strip_size_kb": 0,
00:26:01.464 "state": "configuring",
00:26:01.464 "raid_level": "raid1",
00:26:01.464 "superblock": true,
00:26:01.464 "num_base_bdevs": 2,
00:26:01.464 "num_base_bdevs_discovered": 1,
00:26:01.464 "num_base_bdevs_operational": 2,
00:26:01.464 "base_bdevs_list": [
00:26:01.464 {
00:26:01.464 "name": "BaseBdev1",
00:26:01.464 "uuid": "f321c5be-d245-4cda-a59c-b2505468b0e8",
00:26:01.464 "is_configured": true,
00:26:01.465 "data_offset": 256,
00:26:01.465 "data_size": 7936
00:26:01.465 },
00:26:01.465 {
00:26:01.465 "name": "BaseBdev2",
00:26:01.465 "uuid": "00000000-0000-0000-0000-000000000000",
00:26:01.465 "is_configured": false,
00:26:01.465 "data_offset": 0,
00:26:01.465 "data_size": 0
00:26:01.465 }
00:26:01.465 ]
00:26:01.465 }'
00:26:01.465 22:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:26:01.465 22:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:26:02.033 22:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2
00:26:02.033 [2024-07-13 22:09:21.358242] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:26:02.033 [2024-07-13 22:09:21.358440] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80
00:26:02.033 [2024-07-13 22:09:21.358455] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:26:02.033 [2024-07-13 22:09:21.358530] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570
00:26:02.033 [2024-07-13 22:09:21.358669] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80
00:26:02.033 [2024-07-13 22:09:21.358682] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80
00:26:02.033 [2024-07-13 22:09:21.358787] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:26:02.033 BaseBdev2
00:26:02.033 22:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2
00:26:02.033 22:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:26:02.033 22:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:26:02.033 22:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i
00:26:02.033 22:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:26:02.033 22:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:26:02.033 22:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:26:02.292 22:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:26:02.551 [
00:26:02.551 {
00:26:02.551 "name": "BaseBdev2",
00:26:02.551 "aliases": [
00:26:02.551 "e9551239-68bd-4e6f-ad73-6f06b8b25ab1"
00:26:02.551 ],
00:26:02.551 "product_name": "Malloc disk",
00:26:02.551 "block_size": 4096,
00:26:02.551 "num_blocks": 8192,
00:26:02.551 "uuid": "e9551239-68bd-4e6f-ad73-6f06b8b25ab1",
00:26:02.551 "md_size": 32,
00:26:02.551 "md_interleave": false,
00:26:02.551 "dif_type": 0,
00:26:02.551 "assigned_rate_limits": {
00:26:02.551 "rw_ios_per_sec": 0,
00:26:02.551 "rw_mbytes_per_sec": 0,
00:26:02.551 "r_mbytes_per_sec": 0,
00:26:02.551 "w_mbytes_per_sec": 0
00:26:02.551 },
00:26:02.551 "claimed": true,
00:26:02.551 "claim_type": "exclusive_write",
00:26:02.551 "zoned": false,
00:26:02.551 "supported_io_types": {
00:26:02.551 "read": true,
00:26:02.551 "write": true,
00:26:02.551 "unmap": true,
00:26:02.551 "flush": true,
00:26:02.551 "reset": true,
00:26:02.551 "nvme_admin": false,
00:26:02.551 "nvme_io": false,
00:26:02.551 "nvme_io_md": false,
00:26:02.551 "write_zeroes": true,
00:26:02.551 "zcopy": true,
00:26:02.551 "get_zone_info": false,
00:26:02.551 "zone_management": false,
00:26:02.551 "zone_append": false,
00:26:02.551 "compare": false,
00:26:02.551 "compare_and_write": false,
00:26:02.551 "abort": true,
00:26:02.551 "seek_hole": false,
00:26:02.551 "seek_data": false,
00:26:02.551 "copy": true,
00:26:02.551 "nvme_iov_md": false
00:26:02.551 },
00:26:02.551 "memory_domains": [
00:26:02.551 {
00:26:02.551 "dma_device_id": "system",
00:26:02.551 "dma_device_type": 1
00:26:02.551 },
00:26:02.551 {
00:26:02.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:26:02.551 "dma_device_type": 2
00:26:02.551 }
00:26:02.551 ],
00:26:02.551 "driver_specific": {}
00:26:02.551 }
00:26:02.551 ]
00:26:02.551 22:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0
00:26:02.551 22:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:26:02.551 22:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:26:02.551 22:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:26:02.551 22:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:26:02.551 22:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:26:02.551 22:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:26:02.551 22:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:26:02.551 22:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:26:02.551 22:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:26:02.551 22:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:26:02.551 22:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:26:02.551 22:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp
00:26:02.551 22:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:26:02.551 22:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:26:02.551 22:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:26:02.551 "name": "Existed_Raid",
00:26:02.551 "uuid": "f91512ab-15f0-4449-ba27-ccfed7453bec",
00:26:02.551 "strip_size_kb": 0,
00:26:02.551 "state": "online",
00:26:02.551 "raid_level": "raid1",
00:26:02.551 "superblock": true,
00:26:02.551 "num_base_bdevs": 2,
00:26:02.551 "num_base_bdevs_discovered": 2,
00:26:02.551 "num_base_bdevs_operational": 2,
00:26:02.551 "base_bdevs_list": [
00:26:02.551 {
00:26:02.551 "name": "BaseBdev1",
00:26:02.551 "uuid": "f321c5be-d245-4cda-a59c-b2505468b0e8",
00:26:02.551 "is_configured": true,
00:26:02.552 "data_offset": 256,
00:26:02.552 "data_size": 7936
00:26:02.552 },
00:26:02.552 {
00:26:02.552 "name": "BaseBdev2",
00:26:02.552 "uuid": "e9551239-68bd-4e6f-ad73-6f06b8b25ab1",
00:26:02.552 "is_configured": true,
00:26:02.552 "data_offset": 256,
00:26:02.552 "data_size": 7936
00:26:02.552 }
00:26:02.552 ]
00:26:02.552 }'
00:26:02.552 22:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:26:02.552 22:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:26:03.118 22:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid
00:26:03.118 22:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid
00:26:03.118 22:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info
00:26:03.118 22:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info
00:26:03.118 22:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names
00:26:03.118 22:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name
00:26:03.118 22:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]'
00:26:03.118 22:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid
00:26:03.377 [2024-07-13 22:09:22.525666] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:26:03.377 22:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{
00:26:03.377 "name": "Existed_Raid",
00:26:03.377 "aliases": [
00:26:03.377 "f91512ab-15f0-4449-ba27-ccfed7453bec"
00:26:03.377 ],
00:26:03.377 "product_name": "Raid Volume",
00:26:03.377 "block_size": 4096,
00:26:03.377 "num_blocks": 7936,
00:26:03.377 "uuid": "f91512ab-15f0-4449-ba27-ccfed7453bec",
00:26:03.377 "md_size": 32,
00:26:03.377 "md_interleave": false,
00:26:03.377 "dif_type": 0,
00:26:03.377 "assigned_rate_limits": {
00:26:03.377 "rw_ios_per_sec": 0,
00:26:03.377 "rw_mbytes_per_sec": 0,
00:26:03.377 "r_mbytes_per_sec": 0,
00:26:03.377 "w_mbytes_per_sec": 0
00:26:03.377 },
00:26:03.377 "claimed": false,
00:26:03.377 "zoned": false,
00:26:03.377 "supported_io_types": {
00:26:03.377 "read": true,
00:26:03.377 "write": true,
00:26:03.377 "unmap": false,
00:26:03.377 "flush": false,
00:26:03.377 "reset": true,
00:26:03.377 "nvme_admin": false,
00:26:03.377 "nvme_io": false,
00:26:03.377 "nvme_io_md": false,
00:26:03.377 "write_zeroes": true,
00:26:03.377 "zcopy": false,
00:26:03.377 "get_zone_info": false,
00:26:03.377 "zone_management": false,
00:26:03.377 "zone_append": false,
00:26:03.377 "compare": false,
00:26:03.377 "compare_and_write": false,
00:26:03.377 "abort": false,
00:26:03.377 "seek_hole": false,
00:26:03.377 "seek_data": false,
00:26:03.377 "copy": false,
00:26:03.377 "nvme_iov_md": false
00:26:03.377 },
00:26:03.377 "memory_domains": [
00:26:03.377 {
00:26:03.377 "dma_device_id": "system",
00:26:03.377 "dma_device_type": 1
00:26:03.377 },
00:26:03.377 {
00:26:03.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:26:03.377 "dma_device_type": 2
00:26:03.377 },
00:26:03.377 {
00:26:03.377 "dma_device_id": "system",
00:26:03.377 "dma_device_type": 1
00:26:03.377 },
00:26:03.377 {
00:26:03.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:26:03.377 "dma_device_type": 2
00:26:03.377 }
00:26:03.377 ],
00:26:03.377 "driver_specific": {
00:26:03.377 "raid": {
00:26:03.377 "uuid": "f91512ab-15f0-4449-ba27-ccfed7453bec",
00:26:03.377 "strip_size_kb": 0,
00:26:03.377 "state": "online",
00:26:03.377 "raid_level": "raid1",
00:26:03.377 "superblock": true,
00:26:03.377 "num_base_bdevs": 2,
00:26:03.377 "num_base_bdevs_discovered": 2,
00:26:03.377 "num_base_bdevs_operational": 2,
00:26:03.377 "base_bdevs_list": [
00:26:03.377 {
00:26:03.377 "name": "BaseBdev1",
00:26:03.377 "uuid": "f321c5be-d245-4cda-a59c-b2505468b0e8",
00:26:03.377 "is_configured": true,
00:26:03.377 "data_offset": 256,
00:26:03.377 "data_size": 7936
00:26:03.377 },
00:26:03.377 {
00:26:03.377 "name": "BaseBdev2",
00:26:03.377 "uuid": "e9551239-68bd-4e6f-ad73-6f06b8b25ab1",
00:26:03.377 "is_configured": true,
00:26:03.377 "data_offset": 256,
00:26:03.377 "data_size": 7936
00:26:03.377 }
00:26:03.377 ]
00:26:03.377 }
00:26:03.377 }
00:26:03.377 }'
00:26:03.377 22:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:26:03.377 22:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1
00:26:03.377 BaseBdev2'
00:26:03.377 22:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:26:03.377 22:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1
00:26:03.377 22:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:26:03.377 22:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:26:03.377 "name": "BaseBdev1",
00:26:03.377 "aliases": [
00:26:03.377 "f321c5be-d245-4cda-a59c-b2505468b0e8"
00:26:03.377 ],
00:26:03.377 "product_name": "Malloc disk",
00:26:03.378 "block_size": 4096,
00:26:03.378 "num_blocks": 8192,
00:26:03.378 "uuid": "f321c5be-d245-4cda-a59c-b2505468b0e8",
00:26:03.378 "md_size": 32,
00:26:03.378 "md_interleave": false,
00:26:03.378 "dif_type": 0,
00:26:03.378 "assigned_rate_limits": {
00:26:03.378 "rw_ios_per_sec": 0,
00:26:03.378 "rw_mbytes_per_sec": 0,
00:26:03.378 "r_mbytes_per_sec": 0,
00:26:03.378 "w_mbytes_per_sec": 0
00:26:03.378 },
00:26:03.378 "claimed": true,
00:26:03.378 "claim_type": "exclusive_write",
00:26:03.378 "zoned": false,
00:26:03.378 "supported_io_types": {
00:26:03.378 "read": true,
00:26:03.378 "write": true,
00:26:03.378 "unmap": true,
00:26:03.378 "flush": true,
00:26:03.378 "reset": true,
00:26:03.378 "nvme_admin": false,
00:26:03.378 "nvme_io": false,
00:26:03.378 "nvme_io_md": false,
00:26:03.378 "write_zeroes": true,
00:26:03.378 "zcopy": true,
00:26:03.378 "get_zone_info": false,
00:26:03.378 "zone_management": false,
00:26:03.378 "zone_append": false,
00:26:03.378 "compare": false,
00:26:03.378 "compare_and_write": false,
00:26:03.378 "abort": true,
00:26:03.378 "seek_hole": false,
00:26:03.378 "seek_data": false,
00:26:03.378 "copy": true,
00:26:03.378 "nvme_iov_md": false
00:26:03.378 },
00:26:03.378 "memory_domains": [
00:26:03.378 {
00:26:03.378 "dma_device_id": "system",
00:26:03.378 "dma_device_type": 1
00:26:03.378 },
00:26:03.378 {
00:26:03.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:26:03.378 "dma_device_type": 2
00:26:03.378 }
00:26:03.378 ],
00:26:03.378 "driver_specific": {}
00:26:03.378 }'
00:26:03.378 22:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:26:03.636 22:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:26:03.636 22:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]]
00:26:03.636 22:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:26:03.636 22:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:26:03.636 22:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]]
00:26:03.636 22:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:26:03.636 22:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:26:03.636 22:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]]
00:26:03.636 22:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:26:03.636 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:26:03.896 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]]
00:26:03.896 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:26:03.896 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:26:03.896 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2
00:26:03.896 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:26:03.896 "name": "BaseBdev2",
00:26:03.896 "aliases": [
00:26:03.896 "e9551239-68bd-4e6f-ad73-6f06b8b25ab1"
00:26:03.896 ],
00:26:03.896 "product_name": "Malloc disk",
00:26:03.896 "block_size": 4096,
00:26:03.896 "num_blocks": 8192,
00:26:03.896 "uuid": "e9551239-68bd-4e6f-ad73-6f06b8b25ab1",
00:26:03.896 "md_size": 32,
00:26:03.896 "md_interleave": false,
00:26:03.896 "dif_type": 0,
00:26:03.896 "assigned_rate_limits": {
00:26:03.896 "rw_ios_per_sec": 0,
00:26:03.896 "rw_mbytes_per_sec": 0,
00:26:03.896 "r_mbytes_per_sec": 0,
00:26:03.896 "w_mbytes_per_sec": 0
00:26:03.896 },
00:26:03.896 "claimed": true,
00:26:03.896 "claim_type": "exclusive_write",
00:26:03.896 "zoned": false,
00:26:03.896 "supported_io_types": {
00:26:03.896 "read": true,
00:26:03.896 "write": true,
00:26:03.896 "unmap": true,
00:26:03.896 "flush": true,
00:26:03.896 "reset": true,
00:26:03.896 "nvme_admin": false,
00:26:03.896 "nvme_io": false,
00:26:03.896 "nvme_io_md": false,
00:26:03.896 "write_zeroes": true,
00:26:03.896 "zcopy": true,
00:26:03.896 "get_zone_info": false,
00:26:03.896 "zone_management": false,
00:26:03.896 "zone_append": false,
00:26:03.896 "compare": false,
00:26:03.896 "compare_and_write": false,
00:26:03.896 "abort": true,
00:26:03.896 "seek_hole": false,
00:26:03.896 "seek_data": false,
00:26:03.896 "copy": true,
00:26:03.896 "nvme_iov_md": false
00:26:03.896 },
00:26:03.896 "memory_domains": [
00:26:03.896 {
00:26:03.896 "dma_device_id": "system",
00:26:03.896 "dma_device_type": 1
00:26:03.896 },
00:26:03.896 {
00:26:03.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:26:03.896 "dma_device_type": 2
00:26:03.896 }
00:26:03.896 ],
00:26:03.896 "driver_specific": {}
00:26:03.896 }'
00:26:03.896 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:26:03.896 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:26:04.154 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]]
00:26:04.154 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:26:04.154 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:26:04.154 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]]
00:26:04.154 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:26:04.154 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:26:04.154 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]]
00:26:04.154 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:26:04.154 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:26:04.154 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]]
00:26:04.154 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:26:04.414 [2024-07-13 22:09:23.688541] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:26:04.414 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state
00:26:04.414 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1
00:26:04.414 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in
00:26:04.414 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0
00:26:04.414 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online
00:26:04.414 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:26:04.414 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:26:04.414 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:26:04.414 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:26:04.414 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:26:04.414 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1
00:26:04.414 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:26:04.414 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:26:04.414 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:26:04.414 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp
00:26:04.706 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:26:04.706 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:26:04.706 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:26:04.706 "name": "Existed_Raid",
00:26:04.706 "uuid": "f91512ab-15f0-4449-ba27-ccfed7453bec",
00:26:04.706 "strip_size_kb": 0,
00:26:04.706 "state": "online",
00:26:04.706 "raid_level": "raid1",
00:26:04.706 "superblock": true,
00:26:04.706 "num_base_bdevs": 2, 00:26:04.706 "num_base_bdevs_discovered": 1, 00:26:04.706 "num_base_bdevs_operational": 1, 00:26:04.706 "base_bdevs_list": [ 00:26:04.706 { 00:26:04.706 "name": null, 00:26:04.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.706 "is_configured": false, 00:26:04.706 "data_offset": 256, 00:26:04.706 "data_size": 7936 00:26:04.706 }, 00:26:04.706 { 00:26:04.706 "name": "BaseBdev2", 00:26:04.706 "uuid": "e9551239-68bd-4e6f-ad73-6f06b8b25ab1", 00:26:04.706 "is_configured": true, 00:26:04.706 "data_offset": 256, 00:26:04.706 "data_size": 7936 00:26:04.706 } 00:26:04.706 ] 00:26:04.706 }' 00:26:04.706 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:04.706 22:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:05.298 22:09:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:26:05.298 22:09:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:05.298 22:09:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:05.298 22:09:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.298 22:09:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:05.298 22:09:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:05.298 22:09:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:05.558 [2024-07-13 22:09:24.784926] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:05.558 [2024-07-13 22:09:24.785024] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:05.558 [2024-07-13 22:09:24.882159] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:05.558 [2024-07-13 22:09:24.882208] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:05.558 [2024-07-13 22:09:24.882222] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:26:05.558 22:09:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:05.558 22:09:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:05.558 22:09:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.558 22:09:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:26:05.817 22:09:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:26:05.817 22:09:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:26:05.817 22:09:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:26:05.817 22:09:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 1506575 00:26:05.817 22:09:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 1506575 ']' 00:26:05.817 22:09:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 1506575 00:26:05.817 22:09:25 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:26:05.817 22:09:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:05.817 22:09:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1506575 00:26:05.817 22:09:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:05.817 22:09:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:05.817 22:09:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1506575' 00:26:05.817 killing process with pid 1506575 00:26:05.817 22:09:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 1506575 00:26:05.817 [2024-07-13 22:09:25.125996] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:05.817 22:09:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 1506575 00:26:05.817 [2024-07-13 22:09:25.143074] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:07.198 22:09:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:26:07.198 00:26:07.198 real 0m9.421s 00:26:07.198 user 0m15.391s 00:26:07.198 sys 0m1.767s 00:26:07.198 22:09:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:07.198 22:09:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:07.198 ************************************ 00:26:07.198 END TEST raid_state_function_test_sb_md_separate 00:26:07.198 ************************************ 00:26:07.198 22:09:26 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:07.198 22:09:26 
bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:26:07.198 22:09:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:26:07.198 22:09:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:07.198 22:09:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:07.198 ************************************ 00:26:07.198 START TEST raid_superblock_test_md_separate 00:26:07.198 ************************************ 00:26:07.198 22:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:26:07.198 22:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:26:07.198 22:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:26:07.198 22:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:26:07.198 22:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:26:07.198 22:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:26:07.198 22:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:26:07.198 22:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:26:07.198 22:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:26:07.198 22:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:26:07.198 22:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:26:07.198 22:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:26:07.198 22:09:26 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:26:07.198 22:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:26:07.198 22:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:26:07.198 22:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:26:07.198 22:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=1508391 00:26:07.198 22:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 1508391 /var/tmp/spdk-raid.sock 00:26:07.198 22:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:26:07.198 22:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@829 -- # '[' -z 1508391 ']' 00:26:07.198 22:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:07.198 22:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:07.198 22:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:07.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:07.198 22:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:07.198 22:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:07.198 [2024-07-13 22:09:26.518888] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:26:07.198 [2024-07-13 22:09:26.519013] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1508391 ] 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3d:02.3 cannot be used 
00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:07.458 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:07.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.458 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:07.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:07.459 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:07.459 [2024-07-13 22:09:26.682251] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.718 [2024-07-13 22:09:26.887730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.978 [2024-07-13 22:09:27.141456] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:07.978 [2024-07-13 22:09:27.141482] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:07.978 22:09:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:07.978 22:09:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # return 0 00:26:07.978 22:09:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:26:07.978 22:09:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:07.978 22:09:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:26:07.978 22:09:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:26:07.978 22:09:27 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:07.978 22:09:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:07.978 22:09:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:07.978 22:09:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:07.978 22:09:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:26:08.236 malloc1 00:26:08.237 22:09:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:08.495 [2024-07-13 22:09:27.657300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:08.495 [2024-07-13 22:09:27.657355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:08.495 [2024-07-13 22:09:27.657396] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:26:08.495 [2024-07-13 22:09:27.657408] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:08.495 [2024-07-13 22:09:27.659327] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:08.495 [2024-07-13 22:09:27.659355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:08.495 pt1 00:26:08.495 22:09:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:08.495 22:09:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 
00:26:08.495 22:09:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:26:08.495 22:09:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:26:08.496 22:09:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:08.496 22:09:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:08.496 22:09:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:08.496 22:09:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:08.496 22:09:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:26:08.496 malloc2 00:26:08.755 22:09:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:08.755 [2024-07-13 22:09:28.047819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:08.755 [2024-07-13 22:09:28.047874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:08.755 [2024-07-13 22:09:28.047895] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:26:08.755 [2024-07-13 22:09:28.047913] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:08.755 [2024-07-13 22:09:28.049771] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:08.755 [2024-07-13 22:09:28.049798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:26:08.755 pt2 00:26:08.755 22:09:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:08.755 22:09:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:08.755 22:09:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:26:09.013 [2024-07-13 22:09:28.232316] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:09.013 [2024-07-13 22:09:28.234079] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:09.013 [2024-07-13 22:09:28.234248] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040880 00:26:09.013 [2024-07-13 22:09:28.234262] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:09.014 [2024-07-13 22:09:28.234353] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:26:09.014 [2024-07-13 22:09:28.234510] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040880 00:26:09.014 [2024-07-13 22:09:28.234522] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040880 00:26:09.014 [2024-07-13 22:09:28.234631] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:09.014 22:09:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:09.014 22:09:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:09.014 22:09:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:09.014 22:09:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:26:09.014 22:09:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:09.014 22:09:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:09.014 22:09:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:09.014 22:09:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:09.014 22:09:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:09.014 22:09:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:09.014 22:09:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.014 22:09:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:09.273 22:09:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:09.273 "name": "raid_bdev1", 00:26:09.273 "uuid": "7add9485-ef93-4ce5-b250-de0ea8e137da", 00:26:09.273 "strip_size_kb": 0, 00:26:09.273 "state": "online", 00:26:09.273 "raid_level": "raid1", 00:26:09.273 "superblock": true, 00:26:09.273 "num_base_bdevs": 2, 00:26:09.273 "num_base_bdevs_discovered": 2, 00:26:09.273 "num_base_bdevs_operational": 2, 00:26:09.273 "base_bdevs_list": [ 00:26:09.273 { 00:26:09.273 "name": "pt1", 00:26:09.273 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:09.273 "is_configured": true, 00:26:09.273 "data_offset": 256, 00:26:09.273 "data_size": 7936 00:26:09.273 }, 00:26:09.273 { 00:26:09.273 "name": "pt2", 00:26:09.273 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:09.273 "is_configured": true, 00:26:09.273 "data_offset": 256, 00:26:09.273 "data_size": 7936 
00:26:09.273 } 00:26:09.273 ] 00:26:09.273 }' 00:26:09.273 22:09:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:09.273 22:09:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:09.841 22:09:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:26:09.841 22:09:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:09.841 22:09:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:09.841 22:09:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:09.841 22:09:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:09.841 22:09:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:26:09.841 22:09:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:09.841 22:09:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:09.841 [2024-07-13 22:09:29.090795] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:09.841 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:09.841 "name": "raid_bdev1", 00:26:09.841 "aliases": [ 00:26:09.841 "7add9485-ef93-4ce5-b250-de0ea8e137da" 00:26:09.841 ], 00:26:09.841 "product_name": "Raid Volume", 00:26:09.841 "block_size": 4096, 00:26:09.841 "num_blocks": 7936, 00:26:09.841 "uuid": "7add9485-ef93-4ce5-b250-de0ea8e137da", 00:26:09.841 "md_size": 32, 00:26:09.841 "md_interleave": false, 00:26:09.841 "dif_type": 0, 00:26:09.841 "assigned_rate_limits": { 00:26:09.841 "rw_ios_per_sec": 0, 
00:26:09.841 "rw_mbytes_per_sec": 0, 00:26:09.841 "r_mbytes_per_sec": 0, 00:26:09.841 "w_mbytes_per_sec": 0 00:26:09.841 }, 00:26:09.841 "claimed": false, 00:26:09.841 "zoned": false, 00:26:09.841 "supported_io_types": { 00:26:09.841 "read": true, 00:26:09.841 "write": true, 00:26:09.841 "unmap": false, 00:26:09.841 "flush": false, 00:26:09.841 "reset": true, 00:26:09.841 "nvme_admin": false, 00:26:09.841 "nvme_io": false, 00:26:09.841 "nvme_io_md": false, 00:26:09.841 "write_zeroes": true, 00:26:09.841 "zcopy": false, 00:26:09.841 "get_zone_info": false, 00:26:09.841 "zone_management": false, 00:26:09.841 "zone_append": false, 00:26:09.841 "compare": false, 00:26:09.841 "compare_and_write": false, 00:26:09.841 "abort": false, 00:26:09.841 "seek_hole": false, 00:26:09.841 "seek_data": false, 00:26:09.841 "copy": false, 00:26:09.841 "nvme_iov_md": false 00:26:09.841 }, 00:26:09.841 "memory_domains": [ 00:26:09.841 { 00:26:09.841 "dma_device_id": "system", 00:26:09.841 "dma_device_type": 1 00:26:09.841 }, 00:26:09.841 { 00:26:09.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.841 "dma_device_type": 2 00:26:09.841 }, 00:26:09.841 { 00:26:09.841 "dma_device_id": "system", 00:26:09.841 "dma_device_type": 1 00:26:09.841 }, 00:26:09.841 { 00:26:09.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.841 "dma_device_type": 2 00:26:09.841 } 00:26:09.841 ], 00:26:09.841 "driver_specific": { 00:26:09.841 "raid": { 00:26:09.841 "uuid": "7add9485-ef93-4ce5-b250-de0ea8e137da", 00:26:09.841 "strip_size_kb": 0, 00:26:09.841 "state": "online", 00:26:09.841 "raid_level": "raid1", 00:26:09.841 "superblock": true, 00:26:09.841 "num_base_bdevs": 2, 00:26:09.841 "num_base_bdevs_discovered": 2, 00:26:09.841 "num_base_bdevs_operational": 2, 00:26:09.841 "base_bdevs_list": [ 00:26:09.841 { 00:26:09.841 "name": "pt1", 00:26:09.841 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:09.841 "is_configured": true, 00:26:09.841 "data_offset": 256, 00:26:09.841 "data_size": 7936 
00:26:09.841 }, 00:26:09.841 { 00:26:09.841 "name": "pt2", 00:26:09.841 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:09.841 "is_configured": true, 00:26:09.841 "data_offset": 256, 00:26:09.841 "data_size": 7936 00:26:09.841 } 00:26:09.841 ] 00:26:09.841 } 00:26:09.841 } 00:26:09.841 }' 00:26:09.841 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:09.841 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:09.841 pt2' 00:26:09.841 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:09.841 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:09.841 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:10.100 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:10.100 "name": "pt1", 00:26:10.100 "aliases": [ 00:26:10.100 "00000000-0000-0000-0000-000000000001" 00:26:10.100 ], 00:26:10.100 "product_name": "passthru", 00:26:10.100 "block_size": 4096, 00:26:10.100 "num_blocks": 8192, 00:26:10.100 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:10.100 "md_size": 32, 00:26:10.100 "md_interleave": false, 00:26:10.100 "dif_type": 0, 00:26:10.100 "assigned_rate_limits": { 00:26:10.100 "rw_ios_per_sec": 0, 00:26:10.100 "rw_mbytes_per_sec": 0, 00:26:10.100 "r_mbytes_per_sec": 0, 00:26:10.100 "w_mbytes_per_sec": 0 00:26:10.100 }, 00:26:10.100 "claimed": true, 00:26:10.100 "claim_type": "exclusive_write", 00:26:10.100 "zoned": false, 00:26:10.100 "supported_io_types": { 00:26:10.100 "read": true, 00:26:10.100 "write": true, 00:26:10.100 "unmap": true, 00:26:10.100 "flush": true, 
00:26:10.100 "reset": true, 00:26:10.100 "nvme_admin": false, 00:26:10.100 "nvme_io": false, 00:26:10.100 "nvme_io_md": false, 00:26:10.100 "write_zeroes": true, 00:26:10.100 "zcopy": true, 00:26:10.100 "get_zone_info": false, 00:26:10.100 "zone_management": false, 00:26:10.100 "zone_append": false, 00:26:10.100 "compare": false, 00:26:10.100 "compare_and_write": false, 00:26:10.100 "abort": true, 00:26:10.100 "seek_hole": false, 00:26:10.100 "seek_data": false, 00:26:10.100 "copy": true, 00:26:10.100 "nvme_iov_md": false 00:26:10.100 }, 00:26:10.100 "memory_domains": [ 00:26:10.100 { 00:26:10.100 "dma_device_id": "system", 00:26:10.100 "dma_device_type": 1 00:26:10.100 }, 00:26:10.100 { 00:26:10.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:10.100 "dma_device_type": 2 00:26:10.100 } 00:26:10.100 ], 00:26:10.100 "driver_specific": { 00:26:10.100 "passthru": { 00:26:10.100 "name": "pt1", 00:26:10.100 "base_bdev_name": "malloc1" 00:26:10.100 } 00:26:10.100 } 00:26:10.100 }' 00:26:10.100 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:10.100 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:10.100 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:10.100 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:10.100 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:10.100 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:26:10.359 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:10.359 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:10.359 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 
00:26:10.359 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:10.359 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:10.359 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:26:10.359 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:10.359 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:10.359 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:10.618 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:10.618 "name": "pt2", 00:26:10.618 "aliases": [ 00:26:10.618 "00000000-0000-0000-0000-000000000002" 00:26:10.618 ], 00:26:10.618 "product_name": "passthru", 00:26:10.618 "block_size": 4096, 00:26:10.618 "num_blocks": 8192, 00:26:10.618 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:10.618 "md_size": 32, 00:26:10.618 "md_interleave": false, 00:26:10.618 "dif_type": 0, 00:26:10.618 "assigned_rate_limits": { 00:26:10.618 "rw_ios_per_sec": 0, 00:26:10.618 "rw_mbytes_per_sec": 0, 00:26:10.618 "r_mbytes_per_sec": 0, 00:26:10.618 "w_mbytes_per_sec": 0 00:26:10.618 }, 00:26:10.618 "claimed": true, 00:26:10.618 "claim_type": "exclusive_write", 00:26:10.618 "zoned": false, 00:26:10.618 "supported_io_types": { 00:26:10.618 "read": true, 00:26:10.618 "write": true, 00:26:10.618 "unmap": true, 00:26:10.618 "flush": true, 00:26:10.618 "reset": true, 00:26:10.618 "nvme_admin": false, 00:26:10.618 "nvme_io": false, 00:26:10.618 "nvme_io_md": false, 00:26:10.618 "write_zeroes": true, 00:26:10.618 "zcopy": true, 00:26:10.618 "get_zone_info": false, 00:26:10.618 "zone_management": false, 00:26:10.618 "zone_append": 
false, 00:26:10.618 "compare": false, 00:26:10.618 "compare_and_write": false, 00:26:10.618 "abort": true, 00:26:10.618 "seek_hole": false, 00:26:10.618 "seek_data": false, 00:26:10.618 "copy": true, 00:26:10.618 "nvme_iov_md": false 00:26:10.618 }, 00:26:10.618 "memory_domains": [ 00:26:10.618 { 00:26:10.618 "dma_device_id": "system", 00:26:10.618 "dma_device_type": 1 00:26:10.618 }, 00:26:10.618 { 00:26:10.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:10.618 "dma_device_type": 2 00:26:10.618 } 00:26:10.618 ], 00:26:10.618 "driver_specific": { 00:26:10.618 "passthru": { 00:26:10.618 "name": "pt2", 00:26:10.618 "base_bdev_name": "malloc2" 00:26:10.618 } 00:26:10.618 } 00:26:10.618 }' 00:26:10.618 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:10.618 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:10.618 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:10.618 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:10.618 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:10.618 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:26:10.618 22:09:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:10.877 22:09:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:10.877 22:09:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:26:10.877 22:09:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:10.877 22:09:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:10.877 22:09:30 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:26:10.877 22:09:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:10.877 22:09:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:26:11.135 [2024-07-13 22:09:30.294000] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:11.135 22:09:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=7add9485-ef93-4ce5-b250-de0ea8e137da 00:26:11.135 22:09:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z 7add9485-ef93-4ce5-b250-de0ea8e137da ']' 00:26:11.135 22:09:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:11.136 [2024-07-13 22:09:30.466184] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:11.136 [2024-07-13 22:09:30.466212] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:11.136 [2024-07-13 22:09:30.466286] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:11.136 [2024-07-13 22:09:30.466339] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:11.136 [2024-07-13 22:09:30.466356] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040880 name raid_bdev1, state offline 00:26:11.136 22:09:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.136 22:09:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:26:11.394 
22:09:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:26:11.394 22:09:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:26:11.394 22:09:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:26:11.394 22:09:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:11.651 22:09:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:26:11.651 22:09:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:11.651 22:09:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:26:11.651 22:09:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:11.910 22:09:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:26:11.910 22:09:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:26:11.910 22:09:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:26:11.910 22:09:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:26:11.910 22:09:31 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:11.910 22:09:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:11.910 22:09:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:11.910 22:09:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:11.910 22:09:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:11.910 22:09:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:11.910 22:09:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:11.910 22:09:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:26:11.910 22:09:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:26:12.168 [2024-07-13 22:09:31.312391] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:12.168 [2024-07-13 22:09:31.314183] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:12.168 [2024-07-13 22:09:31.314245] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:12.168 [2024-07-13 22:09:31.314291] 
bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:12.168 [2024-07-13 22:09:31.314323] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:12.168 [2024-07-13 22:09:31.314335] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state configuring 00:26:12.168 request: 00:26:12.168 { 00:26:12.168 "name": "raid_bdev1", 00:26:12.168 "raid_level": "raid1", 00:26:12.168 "base_bdevs": [ 00:26:12.168 "malloc1", 00:26:12.168 "malloc2" 00:26:12.168 ], 00:26:12.168 "superblock": false, 00:26:12.168 "method": "bdev_raid_create", 00:26:12.168 "req_id": 1 00:26:12.168 } 00:26:12.168 Got JSON-RPC error response 00:26:12.168 response: 00:26:12.168 { 00:26:12.168 "code": -17, 00:26:12.168 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:12.168 } 00:26:12.168 22:09:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:26:12.168 22:09:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:12.168 22:09:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:12.168 22:09:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:12.168 22:09:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:26:12.168 22:09:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.168 22:09:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:26:12.168 22:09:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:26:12.168 22:09:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:12.426 [2024-07-13 22:09:31.641212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:12.426 [2024-07-13 22:09:31.641266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:12.426 [2024-07-13 22:09:31.641284] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:26:12.426 [2024-07-13 22:09:31.641297] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:12.426 [2024-07-13 22:09:31.643283] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:12.426 [2024-07-13 22:09:31.643315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:12.426 [2024-07-13 22:09:31.643366] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:12.426 [2024-07-13 22:09:31.643417] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:12.426 pt1 00:26:12.426 22:09:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:26:12.426 22:09:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:12.426 22:09:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:12.426 22:09:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:12.426 22:09:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:12.426 22:09:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:12.426 22:09:31 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:12.426 22:09:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:12.426 22:09:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:12.426 22:09:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:12.426 22:09:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.426 22:09:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:12.684 22:09:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:12.684 "name": "raid_bdev1", 00:26:12.684 "uuid": "7add9485-ef93-4ce5-b250-de0ea8e137da", 00:26:12.684 "strip_size_kb": 0, 00:26:12.684 "state": "configuring", 00:26:12.684 "raid_level": "raid1", 00:26:12.684 "superblock": true, 00:26:12.684 "num_base_bdevs": 2, 00:26:12.684 "num_base_bdevs_discovered": 1, 00:26:12.684 "num_base_bdevs_operational": 2, 00:26:12.684 "base_bdevs_list": [ 00:26:12.684 { 00:26:12.684 "name": "pt1", 00:26:12.684 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:12.684 "is_configured": true, 00:26:12.684 "data_offset": 256, 00:26:12.684 "data_size": 7936 00:26:12.684 }, 00:26:12.684 { 00:26:12.684 "name": null, 00:26:12.684 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:12.684 "is_configured": false, 00:26:12.684 "data_offset": 256, 00:26:12.684 "data_size": 7936 00:26:12.684 } 00:26:12.684 ] 00:26:12.684 }' 00:26:12.684 22:09:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:12.684 22:09:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:12.942 22:09:32 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:26:12.942 22:09:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:26:12.942 22:09:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:26:12.942 22:09:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:13.200 [2024-07-13 22:09:32.455363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:13.200 [2024-07-13 22:09:32.455424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:13.200 [2024-07-13 22:09:32.455445] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041d80 00:26:13.200 [2024-07-13 22:09:32.455464] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:13.200 [2024-07-13 22:09:32.455708] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:13.200 [2024-07-13 22:09:32.455725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:13.200 [2024-07-13 22:09:32.455773] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:13.200 [2024-07-13 22:09:32.455801] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:13.200 [2024-07-13 22:09:32.455928] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:26:13.200 [2024-07-13 22:09:32.455941] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:13.200 [2024-07-13 22:09:32.456011] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:26:13.200 [2024-07-13 22:09:32.456152] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:26:13.200 [2024-07-13 22:09:32.456162] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:26:13.200 [2024-07-13 22:09:32.456262] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:13.200 pt2 00:26:13.200 22:09:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:26:13.200 22:09:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:26:13.200 22:09:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:13.200 22:09:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:13.200 22:09:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:13.200 22:09:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:13.201 22:09:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:13.201 22:09:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:13.201 22:09:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:13.201 22:09:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:13.201 22:09:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:13.201 22:09:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:13.201 22:09:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.201 22:09:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:13.459 22:09:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:13.459 "name": "raid_bdev1", 00:26:13.459 "uuid": "7add9485-ef93-4ce5-b250-de0ea8e137da", 00:26:13.459 "strip_size_kb": 0, 00:26:13.459 "state": "online", 00:26:13.459 "raid_level": "raid1", 00:26:13.459 "superblock": true, 00:26:13.459 "num_base_bdevs": 2, 00:26:13.459 "num_base_bdevs_discovered": 2, 00:26:13.459 "num_base_bdevs_operational": 2, 00:26:13.459 "base_bdevs_list": [ 00:26:13.459 { 00:26:13.459 "name": "pt1", 00:26:13.459 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:13.459 "is_configured": true, 00:26:13.459 "data_offset": 256, 00:26:13.459 "data_size": 7936 00:26:13.459 }, 00:26:13.459 { 00:26:13.459 "name": "pt2", 00:26:13.459 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:13.459 "is_configured": true, 00:26:13.459 "data_offset": 256, 00:26:13.459 "data_size": 7936 00:26:13.459 } 00:26:13.459 ] 00:26:13.459 }' 00:26:13.459 22:09:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:13.459 22:09:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:14.024 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:26:14.024 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:14.024 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:14.024 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:14.024 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 
00:26:14.024 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:26:14.025 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:14.025 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:14.025 [2024-07-13 22:09:33.285771] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:14.025 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:14.025 "name": "raid_bdev1", 00:26:14.025 "aliases": [ 00:26:14.025 "7add9485-ef93-4ce5-b250-de0ea8e137da" 00:26:14.025 ], 00:26:14.025 "product_name": "Raid Volume", 00:26:14.025 "block_size": 4096, 00:26:14.025 "num_blocks": 7936, 00:26:14.025 "uuid": "7add9485-ef93-4ce5-b250-de0ea8e137da", 00:26:14.025 "md_size": 32, 00:26:14.025 "md_interleave": false, 00:26:14.025 "dif_type": 0, 00:26:14.025 "assigned_rate_limits": { 00:26:14.025 "rw_ios_per_sec": 0, 00:26:14.025 "rw_mbytes_per_sec": 0, 00:26:14.025 "r_mbytes_per_sec": 0, 00:26:14.025 "w_mbytes_per_sec": 0 00:26:14.025 }, 00:26:14.025 "claimed": false, 00:26:14.025 "zoned": false, 00:26:14.025 "supported_io_types": { 00:26:14.025 "read": true, 00:26:14.025 "write": true, 00:26:14.025 "unmap": false, 00:26:14.025 "flush": false, 00:26:14.025 "reset": true, 00:26:14.025 "nvme_admin": false, 00:26:14.025 "nvme_io": false, 00:26:14.025 "nvme_io_md": false, 00:26:14.025 "write_zeroes": true, 00:26:14.025 "zcopy": false, 00:26:14.025 "get_zone_info": false, 00:26:14.025 "zone_management": false, 00:26:14.025 "zone_append": false, 00:26:14.025 "compare": false, 00:26:14.025 "compare_and_write": false, 00:26:14.025 "abort": false, 00:26:14.025 "seek_hole": false, 00:26:14.025 "seek_data": false, 00:26:14.025 "copy": false, 00:26:14.025 "nvme_iov_md": false 
00:26:14.025 }, 00:26:14.025 "memory_domains": [ 00:26:14.025 { 00:26:14.025 "dma_device_id": "system", 00:26:14.025 "dma_device_type": 1 00:26:14.025 }, 00:26:14.025 { 00:26:14.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:14.025 "dma_device_type": 2 00:26:14.025 }, 00:26:14.025 { 00:26:14.025 "dma_device_id": "system", 00:26:14.025 "dma_device_type": 1 00:26:14.025 }, 00:26:14.025 { 00:26:14.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:14.025 "dma_device_type": 2 00:26:14.025 } 00:26:14.025 ], 00:26:14.025 "driver_specific": { 00:26:14.025 "raid": { 00:26:14.025 "uuid": "7add9485-ef93-4ce5-b250-de0ea8e137da", 00:26:14.025 "strip_size_kb": 0, 00:26:14.025 "state": "online", 00:26:14.025 "raid_level": "raid1", 00:26:14.025 "superblock": true, 00:26:14.025 "num_base_bdevs": 2, 00:26:14.025 "num_base_bdevs_discovered": 2, 00:26:14.025 "num_base_bdevs_operational": 2, 00:26:14.025 "base_bdevs_list": [ 00:26:14.025 { 00:26:14.025 "name": "pt1", 00:26:14.025 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:14.025 "is_configured": true, 00:26:14.025 "data_offset": 256, 00:26:14.025 "data_size": 7936 00:26:14.025 }, 00:26:14.025 { 00:26:14.025 "name": "pt2", 00:26:14.025 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:14.025 "is_configured": true, 00:26:14.025 "data_offset": 256, 00:26:14.025 "data_size": 7936 00:26:14.025 } 00:26:14.025 ] 00:26:14.025 } 00:26:14.025 } 00:26:14.025 }' 00:26:14.025 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:14.025 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:14.025 pt2' 00:26:14.025 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:14.025 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:14.025 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:14.282 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:14.282 "name": "pt1", 00:26:14.282 "aliases": [ 00:26:14.282 "00000000-0000-0000-0000-000000000001" 00:26:14.282 ], 00:26:14.282 "product_name": "passthru", 00:26:14.282 "block_size": 4096, 00:26:14.282 "num_blocks": 8192, 00:26:14.282 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:14.282 "md_size": 32, 00:26:14.282 "md_interleave": false, 00:26:14.282 "dif_type": 0, 00:26:14.282 "assigned_rate_limits": { 00:26:14.282 "rw_ios_per_sec": 0, 00:26:14.282 "rw_mbytes_per_sec": 0, 00:26:14.282 "r_mbytes_per_sec": 0, 00:26:14.282 "w_mbytes_per_sec": 0 00:26:14.282 }, 00:26:14.282 "claimed": true, 00:26:14.283 "claim_type": "exclusive_write", 00:26:14.283 "zoned": false, 00:26:14.283 "supported_io_types": { 00:26:14.283 "read": true, 00:26:14.283 "write": true, 00:26:14.283 "unmap": true, 00:26:14.283 "flush": true, 00:26:14.283 "reset": true, 00:26:14.283 "nvme_admin": false, 00:26:14.283 "nvme_io": false, 00:26:14.283 "nvme_io_md": false, 00:26:14.283 "write_zeroes": true, 00:26:14.283 "zcopy": true, 00:26:14.283 "get_zone_info": false, 00:26:14.283 "zone_management": false, 00:26:14.283 "zone_append": false, 00:26:14.283 "compare": false, 00:26:14.283 "compare_and_write": false, 00:26:14.283 "abort": true, 00:26:14.283 "seek_hole": false, 00:26:14.283 "seek_data": false, 00:26:14.283 "copy": true, 00:26:14.283 "nvme_iov_md": false 00:26:14.283 }, 00:26:14.283 "memory_domains": [ 00:26:14.283 { 00:26:14.283 "dma_device_id": "system", 00:26:14.283 "dma_device_type": 1 00:26:14.283 }, 00:26:14.283 { 00:26:14.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:14.283 "dma_device_type": 2 00:26:14.283 } 00:26:14.283 ], 00:26:14.283 
"driver_specific": { 00:26:14.283 "passthru": { 00:26:14.283 "name": "pt1", 00:26:14.283 "base_bdev_name": "malloc1" 00:26:14.283 } 00:26:14.283 } 00:26:14.283 }' 00:26:14.283 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:14.283 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:14.283 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:14.283 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:14.283 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:14.283 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:26:14.283 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:14.540 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:14.540 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:26:14.540 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:14.540 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:14.540 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:26:14.540 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:14.540 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:14.540 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:14.798 22:09:33 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:14.798 "name": "pt2", 00:26:14.798 "aliases": [ 00:26:14.798 "00000000-0000-0000-0000-000000000002" 00:26:14.798 ], 00:26:14.798 "product_name": "passthru", 00:26:14.798 "block_size": 4096, 00:26:14.798 "num_blocks": 8192, 00:26:14.799 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:14.799 "md_size": 32, 00:26:14.799 "md_interleave": false, 00:26:14.799 "dif_type": 0, 00:26:14.799 "assigned_rate_limits": { 00:26:14.799 "rw_ios_per_sec": 0, 00:26:14.799 "rw_mbytes_per_sec": 0, 00:26:14.799 "r_mbytes_per_sec": 0, 00:26:14.799 "w_mbytes_per_sec": 0 00:26:14.799 }, 00:26:14.799 "claimed": true, 00:26:14.799 "claim_type": "exclusive_write", 00:26:14.799 "zoned": false, 00:26:14.799 "supported_io_types": { 00:26:14.799 "read": true, 00:26:14.799 "write": true, 00:26:14.799 "unmap": true, 00:26:14.799 "flush": true, 00:26:14.799 "reset": true, 00:26:14.799 "nvme_admin": false, 00:26:14.799 "nvme_io": false, 00:26:14.799 "nvme_io_md": false, 00:26:14.799 "write_zeroes": true, 00:26:14.799 "zcopy": true, 00:26:14.799 "get_zone_info": false, 00:26:14.799 "zone_management": false, 00:26:14.799 "zone_append": false, 00:26:14.799 "compare": false, 00:26:14.799 "compare_and_write": false, 00:26:14.799 "abort": true, 00:26:14.799 "seek_hole": false, 00:26:14.799 "seek_data": false, 00:26:14.799 "copy": true, 00:26:14.799 "nvme_iov_md": false 00:26:14.799 }, 00:26:14.799 "memory_domains": [ 00:26:14.799 { 00:26:14.799 "dma_device_id": "system", 00:26:14.799 "dma_device_type": 1 00:26:14.799 }, 00:26:14.799 { 00:26:14.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:14.799 "dma_device_type": 2 00:26:14.799 } 00:26:14.799 ], 00:26:14.799 "driver_specific": { 00:26:14.799 "passthru": { 00:26:14.799 "name": "pt2", 00:26:14.799 "base_bdev_name": "malloc2" 00:26:14.799 } 00:26:14.799 } 00:26:14.799 }' 00:26:14.799 22:09:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:14.799 
22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:14.799 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:14.799 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:14.799 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:14.799 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:26:14.799 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:14.799 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:15.057 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:26:15.057 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:15.057 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:15.057 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:26:15.057 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:15.057 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:26:15.314 [2024-07-13 22:09:34.468922] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:15.315 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' 7add9485-ef93-4ce5-b250-de0ea8e137da '!=' 7add9485-ef93-4ce5-b250-de0ea8e137da ']' 00:26:15.315 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:26:15.315 22:09:34 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:15.315 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:26:15.315 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:15.315 [2024-07-13 22:09:34.637111] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:15.315 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:15.315 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:15.315 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:15.315 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:15.315 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:15.315 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:15.315 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:15.315 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:15.315 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:15.315 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:15.315 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:15.315 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.573 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:15.573 "name": "raid_bdev1", 00:26:15.573 "uuid": "7add9485-ef93-4ce5-b250-de0ea8e137da", 00:26:15.573 "strip_size_kb": 0, 00:26:15.573 "state": "online", 00:26:15.573 "raid_level": "raid1", 00:26:15.573 "superblock": true, 00:26:15.573 "num_base_bdevs": 2, 00:26:15.573 "num_base_bdevs_discovered": 1, 00:26:15.573 "num_base_bdevs_operational": 1, 00:26:15.573 "base_bdevs_list": [ 00:26:15.573 { 00:26:15.573 "name": null, 00:26:15.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.573 "is_configured": false, 00:26:15.573 "data_offset": 256, 00:26:15.573 "data_size": 7936 00:26:15.573 }, 00:26:15.573 { 00:26:15.573 "name": "pt2", 00:26:15.573 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:15.573 "is_configured": true, 00:26:15.573 "data_offset": 256, 00:26:15.573 "data_size": 7936 00:26:15.573 } 00:26:15.573 ] 00:26:15.573 }' 00:26:15.573 22:09:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:15.573 22:09:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:16.137 22:09:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:16.137 [2024-07-13 22:09:35.459222] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:16.137 [2024-07-13 22:09:35.459248] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:16.137 [2024-07-13 22:09:35.459322] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:16.137 [2024-07-13 22:09:35.459365] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base 
bdevs is 0, going to free all in destruct 00:26:16.137 [2024-07-13 22:09:35.459378] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:26:16.137 22:09:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.137 22:09:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:26:16.395 22:09:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:26:16.395 22:09:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:26:16.395 22:09:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:26:16.395 22:09:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:26:16.395 22:09:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:16.653 22:09:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:26:16.653 22:09:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:26:16.653 22:09:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:26:16.653 22:09:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:26:16.653 22:09:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:26:16.653 22:09:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:16.653 
[2024-07-13 22:09:35.968539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:16.653 [2024-07-13 22:09:35.968595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:16.653 [2024-07-13 22:09:35.968612] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042080 00:26:16.653 [2024-07-13 22:09:35.968625] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:16.653 [2024-07-13 22:09:35.970526] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:16.653 [2024-07-13 22:09:35.970555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:16.653 [2024-07-13 22:09:35.970598] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:16.653 [2024-07-13 22:09:35.970646] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:16.653 [2024-07-13 22:09:35.970760] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042680 00:26:16.653 [2024-07-13 22:09:35.970773] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:16.653 [2024-07-13 22:09:35.970834] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:26:16.653 [2024-07-13 22:09:35.970982] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042680 00:26:16.653 [2024-07-13 22:09:35.970992] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042680 00:26:16.653 [2024-07-13 22:09:35.971098] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:16.653 pt2 00:26:16.653 22:09:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:16.653 22:09:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:26:16.653 22:09:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:16.653 22:09:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:16.653 22:09:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:16.653 22:09:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:16.653 22:09:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:16.653 22:09:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:16.653 22:09:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:16.653 22:09:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:16.653 22:09:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.653 22:09:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:16.911 22:09:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:16.911 "name": "raid_bdev1", 00:26:16.911 "uuid": "7add9485-ef93-4ce5-b250-de0ea8e137da", 00:26:16.911 "strip_size_kb": 0, 00:26:16.911 "state": "online", 00:26:16.911 "raid_level": "raid1", 00:26:16.911 "superblock": true, 00:26:16.911 "num_base_bdevs": 2, 00:26:16.911 "num_base_bdevs_discovered": 1, 00:26:16.911 "num_base_bdevs_operational": 1, 00:26:16.911 "base_bdevs_list": [ 00:26:16.911 { 00:26:16.911 "name": null, 00:26:16.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.911 "is_configured": false, 00:26:16.911 "data_offset": 256, 
00:26:16.911 "data_size": 7936 00:26:16.911 }, 00:26:16.911 { 00:26:16.911 "name": "pt2", 00:26:16.911 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:16.911 "is_configured": true, 00:26:16.911 "data_offset": 256, 00:26:16.911 "data_size": 7936 00:26:16.911 } 00:26:16.911 ] 00:26:16.911 }' 00:26:16.911 22:09:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:16.911 22:09:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:17.477 22:09:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:17.477 [2024-07-13 22:09:36.770662] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:17.477 [2024-07-13 22:09:36.770691] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:17.477 [2024-07-13 22:09:36.770758] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:17.477 [2024-07-13 22:09:36.770805] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:17.477 [2024-07-13 22:09:36.770816] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state offline 00:26:17.477 22:09:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.477 22:09:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:26:17.735 22:09:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:26:17.735 22:09:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:26:17.735 22:09:36 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:26:17.735 22:09:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:17.735 [2024-07-13 22:09:37.107526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:17.735 [2024-07-13 22:09:37.107580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:17.735 [2024-07-13 22:09:37.107601] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980 00:26:17.735 [2024-07-13 22:09:37.107616] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:17.735 [2024-07-13 22:09:37.109637] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:17.735 [2024-07-13 22:09:37.109667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:17.735 [2024-07-13 22:09:37.109721] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:17.735 [2024-07-13 22:09:37.109769] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:17.735 [2024-07-13 22:09:37.109927] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:26:17.735 [2024-07-13 22:09:37.109940] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:17.735 [2024-07-13 22:09:37.109960] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042f80 name raid_bdev1, state configuring 00:26:17.735 [2024-07-13 22:09:37.110024] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:17.735 [2024-07-13 22:09:37.110087] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:26:17.735 
[2024-07-13 22:09:37.110098] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:17.735 [2024-07-13 22:09:37.110160] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:26:17.735 [2024-07-13 22:09:37.110299] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:26:17.735 [2024-07-13 22:09:37.110311] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:26:17.735 [2024-07-13 22:09:37.110424] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:17.735 pt1 00:26:17.735 22:09:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:26:17.735 22:09:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:17.735 22:09:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:17.735 22:09:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:17.735 22:09:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:17.735 22:09:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:17.735 22:09:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:17.735 22:09:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:17.735 22:09:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:17.735 22:09:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:17.736 22:09:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:17.994 
22:09:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:17.994 22:09:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.994 22:09:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:17.994 "name": "raid_bdev1", 00:26:17.994 "uuid": "7add9485-ef93-4ce5-b250-de0ea8e137da", 00:26:17.994 "strip_size_kb": 0, 00:26:17.994 "state": "online", 00:26:17.994 "raid_level": "raid1", 00:26:17.994 "superblock": true, 00:26:17.994 "num_base_bdevs": 2, 00:26:17.994 "num_base_bdevs_discovered": 1, 00:26:17.994 "num_base_bdevs_operational": 1, 00:26:17.994 "base_bdevs_list": [ 00:26:17.994 { 00:26:17.994 "name": null, 00:26:17.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:17.994 "is_configured": false, 00:26:17.994 "data_offset": 256, 00:26:17.994 "data_size": 7936 00:26:17.994 }, 00:26:17.994 { 00:26:17.994 "name": "pt2", 00:26:17.994 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:17.994 "is_configured": true, 00:26:17.994 "data_offset": 256, 00:26:17.994 "data_size": 7936 00:26:17.994 } 00:26:17.994 ] 00:26:17.994 }' 00:26:17.994 22:09:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:17.994 22:09:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:18.560 22:09:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:18.560 22:09:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:26:18.820 22:09:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 
00:26:18.820 22:09:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:18.820 22:09:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:26:18.820 [2024-07-13 22:09:38.126453] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:18.820 22:09:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 7add9485-ef93-4ce5-b250-de0ea8e137da '!=' 7add9485-ef93-4ce5-b250-de0ea8e137da ']' 00:26:18.820 22:09:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 1508391 00:26:18.820 22:09:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@948 -- # '[' -z 1508391 ']' 00:26:18.820 22:09:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # kill -0 1508391 00:26:18.820 22:09:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # uname 00:26:18.820 22:09:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:18.820 22:09:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1508391 00:26:18.820 22:09:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:18.820 22:09:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:18.820 22:09:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1508391' 00:26:18.820 killing process with pid 1508391 00:26:18.820 22:09:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@967 -- # kill 1508391 00:26:18.820 [2024-07-13 22:09:38.183344] bdev_raid.c:1358:raid_bdev_fini_start: 
*DEBUG*: raid_bdev_fini_start 00:26:18.820 [2024-07-13 22:09:38.183426] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:18.820 [2024-07-13 22:09:38.183471] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:18.820 [2024-07-13 22:09:38.183486] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:26:18.820 22:09:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # wait 1508391 00:26:19.079 [2024-07-13 22:09:38.394555] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:20.499 22:09:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:26:20.499 00:26:20.499 real 0m13.151s 00:26:20.499 user 0m22.452s 00:26:20.499 sys 0m2.512s 00:26:20.499 22:09:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:20.499 22:09:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:20.499 ************************************ 00:26:20.499 END TEST raid_superblock_test_md_separate 00:26:20.499 ************************************ 00:26:20.499 22:09:39 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:20.499 22:09:39 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' true = true ']' 00:26:20.499 22:09:39 bdev_raid -- bdev/bdev_raid.sh@908 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:26:20.499 22:09:39 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:26:20.499 22:09:39 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:20.499 22:09:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:20.499 ************************************ 00:26:20.499 START TEST raid_rebuild_test_sb_md_separate 00:26:20.499 ************************************ 00:26:20.499 22:09:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local verify=true 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:26:20.499 
22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local strip_size 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local create_arg 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local data_offset 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # raid_pid=1510944 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # waitforlisten 1510944 /var/tmp/spdk-raid.sock 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 1510944 ']' 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:26:20.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:20.499 22:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:20.499 [2024-07-13 22:09:39.734437] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:26:20.499 [2024-07-13 22:09:39.734548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1510944 ] 00:26:20.499 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:20.499 Zero copy mechanism will not be used. 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3d:01.7 cannot 
be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:20.499 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:20.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.499 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:20.759 [2024-07-13 22:09:39.899550] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.759 [2024-07-13 22:09:40.113688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.018 [2024-07-13 22:09:40.352759] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:21.018 [2024-07-13 22:09:40.352789] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:21.277 22:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:21.277 22:09:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:26:21.277 22:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:21.277 22:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:26:21.537 BaseBdev1_malloc 00:26:21.537 22:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:21.537 [2024-07-13 22:09:40.853463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:21.537 [2024-07-13 22:09:40.853521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:21.537 [2024-07-13 22:09:40.853546] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:26:21.537 [2024-07-13 22:09:40.853560] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:21.537 [2024-07-13 22:09:40.855478] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:21.537 [2024-07-13 22:09:40.855507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:21.537 BaseBdev1 00:26:21.537 22:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:21.537 22:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:26:21.796 BaseBdev2_malloc 00:26:21.796 22:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:22.055 [2024-07-13 22:09:41.219400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:22.055 [2024-07-13 22:09:41.219449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:22.055 [2024-07-13 22:09:41.219470] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:26:22.055 [2024-07-13 22:09:41.219486] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:22.055 [2024-07-13 22:09:41.221393] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:22.055 [2024-07-13 22:09:41.221421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:22.055 BaseBdev2 00:26:22.056 22:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:26:22.056 spare_malloc 00:26:22.315 22:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:22.315 spare_delay 00:26:22.315 22:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:22.574 [2024-07-13 22:09:41.768432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:22.574 [2024-07-13 22:09:41.768486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:22.574 [2024-07-13 22:09:41.768510] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:26:22.574 [2024-07-13 22:09:41.768523] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:22.574 [2024-07-13 22:09:41.770430] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:22.574 [2024-07-13 22:09:41.770461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:22.574 spare 00:26:22.574 22:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:26:22.574 [2024-07-13 22:09:41.936915] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:22.574 [2024-07-13 22:09:41.938684] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:22.574 [2024-07-13 22:09:41.938870] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:26:22.574 [2024-07-13 22:09:41.938888] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:22.574 [2024-07-13 22:09:41.938983] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:26:22.574 [2024-07-13 22:09:41.939152] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:26:22.574 [2024-07-13 22:09:41.939163] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:26:22.574 [2024-07-13 22:09:41.939288] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:22.574 22:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:22.574 22:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:26:22.574 22:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:22.574 22:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:22.574 22:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:22.574 22:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:22.574 22:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:22.574 22:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:22.574 22:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:22.574 22:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:22.574 22:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:22.574 22:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:22.834 22:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:22.834 "name": "raid_bdev1", 00:26:22.834 "uuid": "a22f0932-c8b3-448f-b2d1-c1e7a2e32f88", 00:26:22.834 "strip_size_kb": 0, 00:26:22.834 "state": "online", 00:26:22.834 "raid_level": "raid1", 00:26:22.834 "superblock": true, 00:26:22.834 "num_base_bdevs": 2, 00:26:22.834 "num_base_bdevs_discovered": 2, 00:26:22.834 "num_base_bdevs_operational": 2, 00:26:22.834 "base_bdevs_list": [ 00:26:22.834 { 00:26:22.834 "name": "BaseBdev1", 00:26:22.834 "uuid": "e7d3845e-25e5-5009-83b7-17d86c5b9ba3", 00:26:22.834 "is_configured": true, 00:26:22.834 "data_offset": 
256, 00:26:22.834 "data_size": 7936 00:26:22.834 }, 00:26:22.834 { 00:26:22.834 "name": "BaseBdev2", 00:26:22.834 "uuid": "0e26acd4-732f-5a5b-9cfa-4e72fc073fc9", 00:26:22.834 "is_configured": true, 00:26:22.834 "data_offset": 256, 00:26:22.834 "data_size": 7936 00:26:22.834 } 00:26:22.834 ] 00:26:22.834 }' 00:26:22.834 22:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:22.834 22:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:23.402 22:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:26:23.402 22:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:23.402 [2024-07-13 22:09:42.783355] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:23.662 22:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:26:23.662 22:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:23.662 22:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:23.662 22:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:26:23.662 22:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:26:23.662 22:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:26:23.662 22:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:26:23.662 22:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # 
nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:23.662 22:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:23.662 22:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:23.662 22:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:23.662 22:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:23.662 22:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:23.662 22:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:26:23.662 22:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:23.662 22:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:23.662 22:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:23.921 [2024-07-13 22:09:43.136080] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:26:23.922 /dev/nbd0 00:26:23.922 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:23.922 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:23.922 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:26:23.922 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:26:23.922 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:23.922 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:23.922 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:26:23.922 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:26:23.922 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:23.922 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:23.922 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:23.922 1+0 records in 00:26:23.922 1+0 records out 00:26:23.922 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256238 s, 16.0 MB/s 00:26:23.922 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:23.922 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:26:23.922 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:23.922 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:23.922 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:26:23.922 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:23.922 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:23.922 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:26:23.922 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@632 
-- # write_unit_size=1 00:26:23.922 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:26:24.489 7936+0 records in 00:26:24.489 7936+0 records out 00:26:24.489 32505856 bytes (33 MB, 31 MiB) copied, 0.584503 s, 55.6 MB/s 00:26:24.489 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:24.489 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:24.489 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:24.489 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:24.489 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:26:24.489 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:24.489 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:24.747 [2024-07-13 22:09:43.963804] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:24.747 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:24.747 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:24.747 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:24.747 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:24.747 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:24.747 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:24.747 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:26:24.747 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:26:24.747 22:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:24.747 [2024-07-13 22:09:44.128327] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:25.005 22:09:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:25.005 22:09:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:25.005 22:09:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:25.005 22:09:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:25.005 22:09:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:25.005 22:09:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:25.005 22:09:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:25.005 22:09:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:25.005 22:09:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:25.005 22:09:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:25.005 22:09:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:25.005 22:09:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:25.005 22:09:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:25.005 "name": "raid_bdev1", 00:26:25.005 "uuid": "a22f0932-c8b3-448f-b2d1-c1e7a2e32f88", 00:26:25.005 "strip_size_kb": 0, 00:26:25.006 "state": "online", 00:26:25.006 "raid_level": "raid1", 00:26:25.006 "superblock": true, 00:26:25.006 "num_base_bdevs": 2, 00:26:25.006 "num_base_bdevs_discovered": 1, 00:26:25.006 "num_base_bdevs_operational": 1, 00:26:25.006 "base_bdevs_list": [ 00:26:25.006 { 00:26:25.006 "name": null, 00:26:25.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:25.006 "is_configured": false, 00:26:25.006 "data_offset": 256, 00:26:25.006 "data_size": 7936 00:26:25.006 }, 00:26:25.006 { 00:26:25.006 "name": "BaseBdev2", 00:26:25.006 "uuid": "0e26acd4-732f-5a5b-9cfa-4e72fc073fc9", 00:26:25.006 "is_configured": true, 00:26:25.006 "data_offset": 256, 00:26:25.006 "data_size": 7936 00:26:25.006 } 00:26:25.006 ] 00:26:25.006 }' 00:26:25.006 22:09:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:25.006 22:09:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:25.574 22:09:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:25.574 [2024-07-13 22:09:44.958500] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:25.833 [2024-07-13 22:09:44.977351] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001a4410 00:26:25.833 [2024-07-13 22:09:44.979144] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 
00:26:25.833 22:09:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # sleep 1 00:26:26.770 22:09:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:26.770 22:09:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:26.770 22:09:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:26.770 22:09:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:26.770 22:09:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:26.770 22:09:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:26.770 22:09:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:27.029 22:09:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:27.029 "name": "raid_bdev1", 00:26:27.029 "uuid": "a22f0932-c8b3-448f-b2d1-c1e7a2e32f88", 00:26:27.029 "strip_size_kb": 0, 00:26:27.029 "state": "online", 00:26:27.029 "raid_level": "raid1", 00:26:27.029 "superblock": true, 00:26:27.029 "num_base_bdevs": 2, 00:26:27.029 "num_base_bdevs_discovered": 2, 00:26:27.029 "num_base_bdevs_operational": 2, 00:26:27.029 "process": { 00:26:27.029 "type": "rebuild", 00:26:27.029 "target": "spare", 00:26:27.029 "progress": { 00:26:27.029 "blocks": 2816, 00:26:27.029 "percent": 35 00:26:27.029 } 00:26:27.029 }, 00:26:27.029 "base_bdevs_list": [ 00:26:27.029 { 00:26:27.029 "name": "spare", 00:26:27.029 "uuid": "94b519f5-c763-5101-af65-9db1e6ed48c3", 00:26:27.029 "is_configured": true, 00:26:27.029 "data_offset": 256, 00:26:27.029 "data_size": 7936 00:26:27.029 
}, 00:26:27.029 { 00:26:27.029 "name": "BaseBdev2", 00:26:27.029 "uuid": "0e26acd4-732f-5a5b-9cfa-4e72fc073fc9", 00:26:27.029 "is_configured": true, 00:26:27.029 "data_offset": 256, 00:26:27.029 "data_size": 7936 00:26:27.029 } 00:26:27.029 ] 00:26:27.029 }' 00:26:27.029 22:09:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:27.029 22:09:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:27.029 22:09:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:27.029 22:09:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:27.029 22:09:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:27.029 [2024-07-13 22:09:46.380498] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:27.029 [2024-07-13 22:09:46.389952] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:27.029 [2024-07-13 22:09:46.390005] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:27.029 [2024-07-13 22:09:46.390021] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:27.029 [2024-07-13 22:09:46.390032] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:27.289 22:09:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:27.289 22:09:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:27.289 22:09:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:26:27.289 22:09:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:27.289 22:09:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:27.289 22:09:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:27.289 22:09:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:27.289 22:09:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:27.289 22:09:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:27.289 22:09:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:27.289 22:09:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:27.289 22:09:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:27.289 22:09:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:27.289 "name": "raid_bdev1", 00:26:27.289 "uuid": "a22f0932-c8b3-448f-b2d1-c1e7a2e32f88", 00:26:27.289 "strip_size_kb": 0, 00:26:27.289 "state": "online", 00:26:27.289 "raid_level": "raid1", 00:26:27.289 "superblock": true, 00:26:27.289 "num_base_bdevs": 2, 00:26:27.289 "num_base_bdevs_discovered": 1, 00:26:27.289 "num_base_bdevs_operational": 1, 00:26:27.289 "base_bdevs_list": [ 00:26:27.289 { 00:26:27.289 "name": null, 00:26:27.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:27.289 "is_configured": false, 00:26:27.289 "data_offset": 256, 00:26:27.289 "data_size": 7936 00:26:27.289 }, 00:26:27.289 { 00:26:27.289 "name": "BaseBdev2", 00:26:27.289 "uuid": 
"0e26acd4-732f-5a5b-9cfa-4e72fc073fc9", 00:26:27.289 "is_configured": true, 00:26:27.289 "data_offset": 256, 00:26:27.289 "data_size": 7936 00:26:27.289 } 00:26:27.289 ] 00:26:27.289 }' 00:26:27.289 22:09:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:27.289 22:09:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:27.858 22:09:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:27.858 22:09:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:27.858 22:09:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:27.858 22:09:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:27.858 22:09:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:27.858 22:09:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:27.858 22:09:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:27.858 22:09:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:27.858 "name": "raid_bdev1", 00:26:27.858 "uuid": "a22f0932-c8b3-448f-b2d1-c1e7a2e32f88", 00:26:27.858 "strip_size_kb": 0, 00:26:27.858 "state": "online", 00:26:27.858 "raid_level": "raid1", 00:26:27.858 "superblock": true, 00:26:27.858 "num_base_bdevs": 2, 00:26:27.858 "num_base_bdevs_discovered": 1, 00:26:27.858 "num_base_bdevs_operational": 1, 00:26:27.858 "base_bdevs_list": [ 00:26:27.858 { 00:26:27.858 "name": null, 00:26:27.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:27.858 
"is_configured": false, 00:26:27.858 "data_offset": 256, 00:26:27.858 "data_size": 7936 00:26:27.858 }, 00:26:27.858 { 00:26:27.858 "name": "BaseBdev2", 00:26:27.858 "uuid": "0e26acd4-732f-5a5b-9cfa-4e72fc073fc9", 00:26:27.858 "is_configured": true, 00:26:27.858 "data_offset": 256, 00:26:27.858 "data_size": 7936 00:26:27.858 } 00:26:27.858 ] 00:26:27.858 }' 00:26:27.858 22:09:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:28.117 22:09:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:28.117 22:09:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:28.117 22:09:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:28.117 22:09:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:28.117 [2024-07-13 22:09:47.451786] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:28.117 [2024-07-13 22:09:47.468052] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001a44e0 00:26:28.117 [2024-07-13 22:09:47.469788] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:28.118 22:09:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:29.499 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:29.499 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:29.499 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:29.499 22:09:48 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:29.499 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:29.499 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:29.499 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.499 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:29.499 "name": "raid_bdev1", 00:26:29.499 "uuid": "a22f0932-c8b3-448f-b2d1-c1e7a2e32f88", 00:26:29.499 "strip_size_kb": 0, 00:26:29.499 "state": "online", 00:26:29.499 "raid_level": "raid1", 00:26:29.499 "superblock": true, 00:26:29.499 "num_base_bdevs": 2, 00:26:29.499 "num_base_bdevs_discovered": 2, 00:26:29.499 "num_base_bdevs_operational": 2, 00:26:29.499 "process": { 00:26:29.499 "type": "rebuild", 00:26:29.499 "target": "spare", 00:26:29.499 "progress": { 00:26:29.499 "blocks": 2816, 00:26:29.499 "percent": 35 00:26:29.499 } 00:26:29.499 }, 00:26:29.499 "base_bdevs_list": [ 00:26:29.499 { 00:26:29.499 "name": "spare", 00:26:29.499 "uuid": "94b519f5-c763-5101-af65-9db1e6ed48c3", 00:26:29.499 "is_configured": true, 00:26:29.499 "data_offset": 256, 00:26:29.499 "data_size": 7936 00:26:29.499 }, 00:26:29.499 { 00:26:29.499 "name": "BaseBdev2", 00:26:29.499 "uuid": "0e26acd4-732f-5a5b-9cfa-4e72fc073fc9", 00:26:29.499 "is_configured": true, 00:26:29.499 "data_offset": 256, 00:26:29.499 "data_size": 7936 00:26:29.499 } 00:26:29.499 ] 00:26:29.499 }' 00:26:29.499 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:29.499 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:26:29.499 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:29.499 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:29.499 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:26:29.499 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:26:29.499 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:26:29.499 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:26:29.499 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:26:29.499 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:26:29.500 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@705 -- # local timeout=919 00:26:29.500 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:29.500 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:29.500 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:29.500 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:29.500 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:29.500 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:29.500 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:26:29.500 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:29.759 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:29.759 "name": "raid_bdev1", 00:26:29.759 "uuid": "a22f0932-c8b3-448f-b2d1-c1e7a2e32f88", 00:26:29.759 "strip_size_kb": 0, 00:26:29.759 "state": "online", 00:26:29.759 "raid_level": "raid1", 00:26:29.759 "superblock": true, 00:26:29.759 "num_base_bdevs": 2, 00:26:29.759 "num_base_bdevs_discovered": 2, 00:26:29.759 "num_base_bdevs_operational": 2, 00:26:29.759 "process": { 00:26:29.759 "type": "rebuild", 00:26:29.759 "target": "spare", 00:26:29.759 "progress": { 00:26:29.759 "blocks": 3584, 00:26:29.759 "percent": 45 00:26:29.759 } 00:26:29.759 }, 00:26:29.759 "base_bdevs_list": [ 00:26:29.759 { 00:26:29.759 "name": "spare", 00:26:29.759 "uuid": "94b519f5-c763-5101-af65-9db1e6ed48c3", 00:26:29.759 "is_configured": true, 00:26:29.759 "data_offset": 256, 00:26:29.759 "data_size": 7936 00:26:29.759 }, 00:26:29.759 { 00:26:29.759 "name": "BaseBdev2", 00:26:29.759 "uuid": "0e26acd4-732f-5a5b-9cfa-4e72fc073fc9", 00:26:29.759 "is_configured": true, 00:26:29.759 "data_offset": 256, 00:26:29.759 "data_size": 7936 00:26:29.759 } 00:26:29.759 ] 00:26:29.759 }' 00:26:29.759 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:29.759 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:29.759 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:29.759 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:29.759 22:09:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:30.696 22:09:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:30.696 22:09:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:30.696 22:09:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:30.696 22:09:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:30.696 22:09:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:30.696 22:09:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:30.696 22:09:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.696 22:09:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:30.955 22:09:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:30.955 "name": "raid_bdev1", 00:26:30.955 "uuid": "a22f0932-c8b3-448f-b2d1-c1e7a2e32f88", 00:26:30.955 "strip_size_kb": 0, 00:26:30.955 "state": "online", 00:26:30.955 "raid_level": "raid1", 00:26:30.955 "superblock": true, 00:26:30.955 "num_base_bdevs": 2, 00:26:30.955 "num_base_bdevs_discovered": 2, 00:26:30.955 "num_base_bdevs_operational": 2, 00:26:30.955 "process": { 00:26:30.955 "type": "rebuild", 00:26:30.955 "target": "spare", 00:26:30.955 "progress": { 00:26:30.955 "blocks": 6656, 00:26:30.955 "percent": 83 00:26:30.955 } 00:26:30.955 }, 00:26:30.955 "base_bdevs_list": [ 00:26:30.955 { 00:26:30.955 "name": "spare", 00:26:30.955 "uuid": "94b519f5-c763-5101-af65-9db1e6ed48c3", 00:26:30.955 "is_configured": true, 00:26:30.955 "data_offset": 256, 00:26:30.955 "data_size": 7936 00:26:30.955 }, 
00:26:30.955 { 00:26:30.955 "name": "BaseBdev2", 00:26:30.955 "uuid": "0e26acd4-732f-5a5b-9cfa-4e72fc073fc9", 00:26:30.955 "is_configured": true, 00:26:30.955 "data_offset": 256, 00:26:30.955 "data_size": 7936 00:26:30.955 } 00:26:30.955 ] 00:26:30.955 }' 00:26:30.955 22:09:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:30.955 22:09:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:30.955 22:09:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:30.955 22:09:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:30.955 22:09:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:31.214 [2024-07-13 22:09:50.593491] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:31.214 [2024-07-13 22:09:50.593546] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:31.214 [2024-07-13 22:09:50.593640] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:32.152 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:32.152 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:32.152 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:32.152 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:32.152 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:32.152 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 
00:26:32.152 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.152 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.152 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:32.152 "name": "raid_bdev1", 00:26:32.152 "uuid": "a22f0932-c8b3-448f-b2d1-c1e7a2e32f88", 00:26:32.152 "strip_size_kb": 0, 00:26:32.152 "state": "online", 00:26:32.152 "raid_level": "raid1", 00:26:32.152 "superblock": true, 00:26:32.152 "num_base_bdevs": 2, 00:26:32.152 "num_base_bdevs_discovered": 2, 00:26:32.152 "num_base_bdevs_operational": 2, 00:26:32.152 "base_bdevs_list": [ 00:26:32.152 { 00:26:32.152 "name": "spare", 00:26:32.152 "uuid": "94b519f5-c763-5101-af65-9db1e6ed48c3", 00:26:32.152 "is_configured": true, 00:26:32.152 "data_offset": 256, 00:26:32.152 "data_size": 7936 00:26:32.152 }, 00:26:32.152 { 00:26:32.152 "name": "BaseBdev2", 00:26:32.152 "uuid": "0e26acd4-732f-5a5b-9cfa-4e72fc073fc9", 00:26:32.152 "is_configured": true, 00:26:32.152 "data_offset": 256, 00:26:32.152 "data_size": 7936 00:26:32.152 } 00:26:32.152 ] 00:26:32.152 }' 00:26:32.152 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:32.152 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:32.152 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:32.152 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:26:32.152 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # break 00:26:32.152 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:32.152 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:32.152 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:32.152 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:32.152 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:32.152 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.152 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.411 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:32.411 "name": "raid_bdev1", 00:26:32.411 "uuid": "a22f0932-c8b3-448f-b2d1-c1e7a2e32f88", 00:26:32.411 "strip_size_kb": 0, 00:26:32.411 "state": "online", 00:26:32.411 "raid_level": "raid1", 00:26:32.411 "superblock": true, 00:26:32.411 "num_base_bdevs": 2, 00:26:32.411 "num_base_bdevs_discovered": 2, 00:26:32.411 "num_base_bdevs_operational": 2, 00:26:32.411 "base_bdevs_list": [ 00:26:32.411 { 00:26:32.411 "name": "spare", 00:26:32.411 "uuid": "94b519f5-c763-5101-af65-9db1e6ed48c3", 00:26:32.411 "is_configured": true, 00:26:32.411 "data_offset": 256, 00:26:32.411 "data_size": 7936 00:26:32.411 }, 00:26:32.411 { 00:26:32.411 "name": "BaseBdev2", 00:26:32.411 "uuid": "0e26acd4-732f-5a5b-9cfa-4e72fc073fc9", 00:26:32.411 "is_configured": true, 00:26:32.411 "data_offset": 256, 00:26:32.411 "data_size": 7936 00:26:32.411 } 00:26:32.411 ] 00:26:32.411 }' 00:26:32.411 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // 
"none"' 00:26:32.411 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:32.411 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:32.411 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:32.411 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:32.411 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:32.411 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:32.411 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:32.411 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:32.411 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:32.411 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:32.411 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:32.411 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:32.411 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:32.411 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.411 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.670 22:09:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:32.670 "name": "raid_bdev1", 00:26:32.670 "uuid": "a22f0932-c8b3-448f-b2d1-c1e7a2e32f88", 00:26:32.670 "strip_size_kb": 0, 00:26:32.670 "state": "online", 00:26:32.670 "raid_level": "raid1", 00:26:32.670 "superblock": true, 00:26:32.670 "num_base_bdevs": 2, 00:26:32.670 "num_base_bdevs_discovered": 2, 00:26:32.670 "num_base_bdevs_operational": 2, 00:26:32.670 "base_bdevs_list": [ 00:26:32.670 { 00:26:32.670 "name": "spare", 00:26:32.670 "uuid": "94b519f5-c763-5101-af65-9db1e6ed48c3", 00:26:32.670 "is_configured": true, 00:26:32.670 "data_offset": 256, 00:26:32.670 "data_size": 7936 00:26:32.670 }, 00:26:32.670 { 00:26:32.670 "name": "BaseBdev2", 00:26:32.670 "uuid": "0e26acd4-732f-5a5b-9cfa-4e72fc073fc9", 00:26:32.670 "is_configured": true, 00:26:32.670 "data_offset": 256, 00:26:32.670 "data_size": 7936 00:26:32.670 } 00:26:32.670 ] 00:26:32.670 }' 00:26:32.670 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:32.670 22:09:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:33.237 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:33.237 [2024-07-13 22:09:52.561600] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:33.237 [2024-07-13 22:09:52.561633] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:33.237 [2024-07-13 22:09:52.561706] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:33.237 [2024-07-13 22:09:52.561772] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:33.237 [2024-07-13 22:09:52.561784] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:26:33.237 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:33.237 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # jq length 00:26:33.496 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:26:33.496 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:26:33.496 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:26:33.496 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:33.496 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:33.496 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:33.496 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:33.496 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:33.496 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:33.496 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:26:33.496 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:33.496 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:33.496 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:33.754 /dev/nbd0 00:26:33.754 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:33.754 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:33.754 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:26:33.754 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:26:33.754 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:33.754 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:33.754 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:26:33.754 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:26:33.754 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:33.754 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:33.754 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:33.754 1+0 records in 00:26:33.754 1+0 records out 00:26:33.754 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253978 s, 16.1 MB/s 00:26:33.754 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:33.754 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:26:33.755 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm 
-f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:33.755 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:33.755 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:26:33.755 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:33.755 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:33.755 22:09:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:33.755 /dev/nbd1 00:26:33.755 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:33.755 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:33.755 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:26:33.755 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:26:33.755 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:33.755 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:33.755 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:26:34.025 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:26:34.025 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:34.025 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:34.025 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:34.025 1+0 records in 00:26:34.025 1+0 records out 00:26:34.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221359 s, 18.5 MB/s 00:26:34.025 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:34.025 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:26:34.025 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:34.025 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:34.025 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:26:34.025 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:34.025 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:34.025 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:34.025 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:34.025 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:34.025 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:34.025 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:34.025 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:26:34.025 22:09:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:34.025 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:34.283 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:34.283 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:34.283 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:34.283 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:34.283 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:34.283 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:34.283 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:26:34.283 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:26:34.283 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:34.283 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:34.609 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:34.609 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:34.609 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:34.609 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:34.609 
22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:34.609 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:34.609 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:26:34.609 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:26:34.609 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:26:34.609 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:34.609 22:09:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:34.868 [2024-07-13 22:09:54.042689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:34.868 [2024-07-13 22:09:54.042743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:34.868 [2024-07-13 22:09:54.042769] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043280 00:26:34.868 [2024-07-13 22:09:54.042781] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:34.868 [2024-07-13 22:09:54.044744] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:34.868 [2024-07-13 22:09:54.044772] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:34.868 [2024-07-13 22:09:54.044834] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:34.868 [2024-07-13 22:09:54.044877] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:34.868 [2024-07-13 
22:09:54.045044] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:34.868 spare 00:26:34.868 22:09:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:34.868 22:09:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:34.868 22:09:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:34.868 22:09:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:34.868 22:09:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:34.868 22:09:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:34.868 22:09:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:34.868 22:09:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:34.868 22:09:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:34.868 22:09:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:34.868 22:09:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.868 22:09:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.868 [2024-07-13 22:09:54.145375] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043880 00:26:34.868 [2024-07-13 22:09:54.145432] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:34.868 [2024-07-13 22:09:54.145533] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c9390 00:26:34.868 [2024-07-13 22:09:54.145735] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043880 00:26:34.868 [2024-07-13 22:09:54.145745] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043880 00:26:34.868 [2024-07-13 22:09:54.145889] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:34.868 22:09:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:34.868 "name": "raid_bdev1", 00:26:34.868 "uuid": "a22f0932-c8b3-448f-b2d1-c1e7a2e32f88", 00:26:34.868 "strip_size_kb": 0, 00:26:34.868 "state": "online", 00:26:34.868 "raid_level": "raid1", 00:26:34.868 "superblock": true, 00:26:34.868 "num_base_bdevs": 2, 00:26:34.868 "num_base_bdevs_discovered": 2, 00:26:34.868 "num_base_bdevs_operational": 2, 00:26:34.868 "base_bdevs_list": [ 00:26:34.868 { 00:26:34.868 "name": "spare", 00:26:34.868 "uuid": "94b519f5-c763-5101-af65-9db1e6ed48c3", 00:26:34.868 "is_configured": true, 00:26:34.868 "data_offset": 256, 00:26:34.868 "data_size": 7936 00:26:34.868 }, 00:26:34.868 { 00:26:34.868 "name": "BaseBdev2", 00:26:34.868 "uuid": "0e26acd4-732f-5a5b-9cfa-4e72fc073fc9", 00:26:34.868 "is_configured": true, 00:26:34.868 "data_offset": 256, 00:26:34.868 "data_size": 7936 00:26:34.868 } 00:26:34.868 ] 00:26:34.868 }' 00:26:34.868 22:09:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:34.868 22:09:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:35.435 22:09:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:35.435 22:09:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:35.435 22:09:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:35.435 22:09:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:35.435 22:09:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:35.435 22:09:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:35.435 22:09:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.693 22:09:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:35.693 "name": "raid_bdev1", 00:26:35.693 "uuid": "a22f0932-c8b3-448f-b2d1-c1e7a2e32f88", 00:26:35.693 "strip_size_kb": 0, 00:26:35.693 "state": "online", 00:26:35.693 "raid_level": "raid1", 00:26:35.693 "superblock": true, 00:26:35.693 "num_base_bdevs": 2, 00:26:35.693 "num_base_bdevs_discovered": 2, 00:26:35.693 "num_base_bdevs_operational": 2, 00:26:35.693 "base_bdevs_list": [ 00:26:35.693 { 00:26:35.693 "name": "spare", 00:26:35.693 "uuid": "94b519f5-c763-5101-af65-9db1e6ed48c3", 00:26:35.693 "is_configured": true, 00:26:35.693 "data_offset": 256, 00:26:35.693 "data_size": 7936 00:26:35.693 }, 00:26:35.693 { 00:26:35.693 "name": "BaseBdev2", 00:26:35.693 "uuid": "0e26acd4-732f-5a5b-9cfa-4e72fc073fc9", 00:26:35.693 "is_configured": true, 00:26:35.693 "data_offset": 256, 00:26:35.693 "data_size": 7936 00:26:35.693 } 00:26:35.694 ] 00:26:35.694 }' 00:26:35.694 22:09:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:35.694 22:09:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:35.694 22:09:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.target // "none"' 00:26:35.694 22:09:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:35.694 22:09:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:35.694 22:09:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.952 22:09:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:26:35.952 22:09:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:35.952 [2024-07-13 22:09:55.278012] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:35.952 22:09:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:35.952 22:09:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:35.952 22:09:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:35.952 22:09:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:35.952 22:09:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:35.952 22:09:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:35.952 22:09:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:35.952 22:09:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:35.952 22:09:55 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:35.952 22:09:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:35.952 22:09:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.952 22:09:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:36.210 22:09:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:36.210 "name": "raid_bdev1", 00:26:36.210 "uuid": "a22f0932-c8b3-448f-b2d1-c1e7a2e32f88", 00:26:36.210 "strip_size_kb": 0, 00:26:36.210 "state": "online", 00:26:36.210 "raid_level": "raid1", 00:26:36.210 "superblock": true, 00:26:36.210 "num_base_bdevs": 2, 00:26:36.210 "num_base_bdevs_discovered": 1, 00:26:36.210 "num_base_bdevs_operational": 1, 00:26:36.210 "base_bdevs_list": [ 00:26:36.210 { 00:26:36.210 "name": null, 00:26:36.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:36.210 "is_configured": false, 00:26:36.210 "data_offset": 256, 00:26:36.210 "data_size": 7936 00:26:36.210 }, 00:26:36.210 { 00:26:36.210 "name": "BaseBdev2", 00:26:36.210 "uuid": "0e26acd4-732f-5a5b-9cfa-4e72fc073fc9", 00:26:36.210 "is_configured": true, 00:26:36.210 "data_offset": 256, 00:26:36.210 "data_size": 7936 00:26:36.210 } 00:26:36.210 ] 00:26:36.210 }' 00:26:36.210 22:09:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:36.210 22:09:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:36.776 22:09:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:36.776 [2024-07-13 22:09:56.124239] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:36.776 [2024-07-13 22:09:56.124403] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:26:36.776 [2024-07-13 22:09:56.124423] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:26:36.776 [2024-07-13 22:09:56.124451] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:36.776 [2024-07-13 22:09:56.140442] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c9460 00:26:36.776 [2024-07-13 22:09:56.142223] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:36.776 22:09:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # sleep 1 00:26:38.205 22:09:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:38.205 22:09:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:38.205 22:09:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:38.205 22:09:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:38.205 22:09:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:38.205 22:09:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:38.205 22:09:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:38.205 22:09:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:38.205 "name": 
"raid_bdev1", 00:26:38.205 "uuid": "a22f0932-c8b3-448f-b2d1-c1e7a2e32f88", 00:26:38.205 "strip_size_kb": 0, 00:26:38.205 "state": "online", 00:26:38.205 "raid_level": "raid1", 00:26:38.205 "superblock": true, 00:26:38.205 "num_base_bdevs": 2, 00:26:38.205 "num_base_bdevs_discovered": 2, 00:26:38.205 "num_base_bdevs_operational": 2, 00:26:38.205 "process": { 00:26:38.205 "type": "rebuild", 00:26:38.205 "target": "spare", 00:26:38.205 "progress": { 00:26:38.205 "blocks": 2816, 00:26:38.205 "percent": 35 00:26:38.205 } 00:26:38.205 }, 00:26:38.205 "base_bdevs_list": [ 00:26:38.205 { 00:26:38.205 "name": "spare", 00:26:38.205 "uuid": "94b519f5-c763-5101-af65-9db1e6ed48c3", 00:26:38.205 "is_configured": true, 00:26:38.205 "data_offset": 256, 00:26:38.205 "data_size": 7936 00:26:38.205 }, 00:26:38.205 { 00:26:38.205 "name": "BaseBdev2", 00:26:38.205 "uuid": "0e26acd4-732f-5a5b-9cfa-4e72fc073fc9", 00:26:38.205 "is_configured": true, 00:26:38.205 "data_offset": 256, 00:26:38.205 "data_size": 7936 00:26:38.205 } 00:26:38.205 ] 00:26:38.205 }' 00:26:38.205 22:09:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:38.205 22:09:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:38.205 22:09:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:38.205 22:09:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:38.205 22:09:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:38.205 [2024-07-13 22:09:57.560157] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:38.464 [2024-07-13 22:09:57.653742] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev 
raid_bdev1: No such device 00:26:38.464 [2024-07-13 22:09:57.653795] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:38.465 [2024-07-13 22:09:57.653811] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:38.465 [2024-07-13 22:09:57.653821] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:38.465 22:09:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:38.465 22:09:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:38.465 22:09:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:38.465 22:09:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:38.465 22:09:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:38.465 22:09:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:38.465 22:09:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:38.465 22:09:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:38.465 22:09:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:38.465 22:09:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:38.465 22:09:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:38.465 22:09:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:38.724 22:09:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:38.724 "name": "raid_bdev1", 00:26:38.724 "uuid": "a22f0932-c8b3-448f-b2d1-c1e7a2e32f88", 00:26:38.724 "strip_size_kb": 0, 00:26:38.724 "state": "online", 00:26:38.724 "raid_level": "raid1", 00:26:38.724 "superblock": true, 00:26:38.724 "num_base_bdevs": 2, 00:26:38.724 "num_base_bdevs_discovered": 1, 00:26:38.724 "num_base_bdevs_operational": 1, 00:26:38.724 "base_bdevs_list": [ 00:26:38.724 { 00:26:38.724 "name": null, 00:26:38.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:38.724 "is_configured": false, 00:26:38.724 "data_offset": 256, 00:26:38.724 "data_size": 7936 00:26:38.724 }, 00:26:38.724 { 00:26:38.724 "name": "BaseBdev2", 00:26:38.724 "uuid": "0e26acd4-732f-5a5b-9cfa-4e72fc073fc9", 00:26:38.724 "is_configured": true, 00:26:38.724 "data_offset": 256, 00:26:38.724 "data_size": 7936 00:26:38.724 } 00:26:38.724 ] 00:26:38.724 }' 00:26:38.724 22:09:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:38.724 22:09:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:38.983 22:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:39.241 [2024-07-13 22:09:58.489854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:39.241 [2024-07-13 22:09:58.489924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:39.241 [2024-07-13 22:09:58.489946] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043e80 00:26:39.241 [2024-07-13 22:09:58.489959] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:39.241 [2024-07-13 22:09:58.490224] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:26:39.241 [2024-07-13 22:09:58.490242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:39.241 [2024-07-13 22:09:58.490298] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:39.241 [2024-07-13 22:09:58.490315] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:26:39.241 [2024-07-13 22:09:58.490331] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:26:39.241 [2024-07-13 22:09:58.490358] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:39.241 [2024-07-13 22:09:58.506984] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c9530 00:26:39.241 spare 00:26:39.241 [2024-07-13 22:09:58.508682] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:39.241 22:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # sleep 1 00:26:40.177 22:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:40.177 22:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:40.177 22:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:40.177 22:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:40.177 22:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:40.177 22:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:40.177 22:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:40.436 22:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:40.436 "name": "raid_bdev1", 00:26:40.436 "uuid": "a22f0932-c8b3-448f-b2d1-c1e7a2e32f88", 00:26:40.436 "strip_size_kb": 0, 00:26:40.436 "state": "online", 00:26:40.436 "raid_level": "raid1", 00:26:40.436 "superblock": true, 00:26:40.436 "num_base_bdevs": 2, 00:26:40.436 "num_base_bdevs_discovered": 2, 00:26:40.436 "num_base_bdevs_operational": 2, 00:26:40.436 "process": { 00:26:40.436 "type": "rebuild", 00:26:40.436 "target": "spare", 00:26:40.436 "progress": { 00:26:40.436 "blocks": 2816, 00:26:40.436 "percent": 35 00:26:40.436 } 00:26:40.436 }, 00:26:40.436 "base_bdevs_list": [ 00:26:40.436 { 00:26:40.436 "name": "spare", 00:26:40.436 "uuid": "94b519f5-c763-5101-af65-9db1e6ed48c3", 00:26:40.436 "is_configured": true, 00:26:40.436 "data_offset": 256, 00:26:40.436 "data_size": 7936 00:26:40.436 }, 00:26:40.436 { 00:26:40.436 "name": "BaseBdev2", 00:26:40.436 "uuid": "0e26acd4-732f-5a5b-9cfa-4e72fc073fc9", 00:26:40.436 "is_configured": true, 00:26:40.436 "data_offset": 256, 00:26:40.436 "data_size": 7936 00:26:40.436 } 00:26:40.436 ] 00:26:40.436 }' 00:26:40.436 22:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:40.436 22:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:40.436 22:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:40.436 22:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:40.436 22:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:40.695 [2024-07-13 
22:09:59.942647] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:40.695 [2024-07-13 22:10:00.020301] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:40.695 [2024-07-13 22:10:00.020357] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:40.695 [2024-07-13 22:10:00.020383] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:40.695 [2024-07-13 22:10:00.020392] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:40.695 22:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:40.695 22:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:40.695 22:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:40.695 22:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:40.695 22:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:40.695 22:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:40.695 22:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:40.695 22:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:40.695 22:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:40.695 22:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:40.695 22:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:40.695 22:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:40.953 22:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:40.953 "name": "raid_bdev1", 00:26:40.953 "uuid": "a22f0932-c8b3-448f-b2d1-c1e7a2e32f88", 00:26:40.953 "strip_size_kb": 0, 00:26:40.953 "state": "online", 00:26:40.953 "raid_level": "raid1", 00:26:40.953 "superblock": true, 00:26:40.953 "num_base_bdevs": 2, 00:26:40.953 "num_base_bdevs_discovered": 1, 00:26:40.953 "num_base_bdevs_operational": 1, 00:26:40.953 "base_bdevs_list": [ 00:26:40.953 { 00:26:40.953 "name": null, 00:26:40.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:40.953 "is_configured": false, 00:26:40.953 "data_offset": 256, 00:26:40.953 "data_size": 7936 00:26:40.953 }, 00:26:40.954 { 00:26:40.954 "name": "BaseBdev2", 00:26:40.954 "uuid": "0e26acd4-732f-5a5b-9cfa-4e72fc073fc9", 00:26:40.954 "is_configured": true, 00:26:40.954 "data_offset": 256, 00:26:40.954 "data_size": 7936 00:26:40.954 } 00:26:40.954 ] 00:26:40.954 }' 00:26:40.954 22:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:40.954 22:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:41.520 22:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:41.520 22:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:41.520 22:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:41.521 22:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:41.521 22:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 
00:26:41.521 22:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:41.521 22:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:41.521 22:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:41.521 "name": "raid_bdev1", 00:26:41.521 "uuid": "a22f0932-c8b3-448f-b2d1-c1e7a2e32f88", 00:26:41.521 "strip_size_kb": 0, 00:26:41.521 "state": "online", 00:26:41.521 "raid_level": "raid1", 00:26:41.521 "superblock": true, 00:26:41.521 "num_base_bdevs": 2, 00:26:41.521 "num_base_bdevs_discovered": 1, 00:26:41.521 "num_base_bdevs_operational": 1, 00:26:41.521 "base_bdevs_list": [ 00:26:41.521 { 00:26:41.521 "name": null, 00:26:41.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:41.521 "is_configured": false, 00:26:41.521 "data_offset": 256, 00:26:41.521 "data_size": 7936 00:26:41.521 }, 00:26:41.521 { 00:26:41.521 "name": "BaseBdev2", 00:26:41.521 "uuid": "0e26acd4-732f-5a5b-9cfa-4e72fc073fc9", 00:26:41.521 "is_configured": true, 00:26:41.521 "data_offset": 256, 00:26:41.521 "data_size": 7936 00:26:41.521 } 00:26:41.521 ] 00:26:41.521 }' 00:26:41.521 22:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:41.521 22:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:41.521 22:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:41.779 22:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:41.779 22:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:26:41.779 22:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:42.038 [2024-07-13 22:10:01.230234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:42.038 [2024-07-13 22:10:01.230286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:42.038 [2024-07-13 22:10:01.230313] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044480 00:26:42.038 [2024-07-13 22:10:01.230325] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:42.038 [2024-07-13 22:10:01.230588] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:42.038 [2024-07-13 22:10:01.230605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:42.038 [2024-07-13 22:10:01.230657] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:42.038 [2024-07-13 22:10:01.230675] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:42.038 [2024-07-13 22:10:01.230687] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:42.038 BaseBdev1 00:26:42.038 22:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # sleep 1 00:26:42.972 22:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:42.972 22:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:42.972 22:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:26:42.972 22:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:42.972 22:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:42.972 22:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:42.972 22:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:42.972 22:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:42.972 22:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:42.972 22:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:42.972 22:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:42.972 22:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:43.231 22:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:43.231 "name": "raid_bdev1", 00:26:43.231 "uuid": "a22f0932-c8b3-448f-b2d1-c1e7a2e32f88", 00:26:43.231 "strip_size_kb": 0, 00:26:43.231 "state": "online", 00:26:43.231 "raid_level": "raid1", 00:26:43.231 "superblock": true, 00:26:43.231 "num_base_bdevs": 2, 00:26:43.231 "num_base_bdevs_discovered": 1, 00:26:43.231 "num_base_bdevs_operational": 1, 00:26:43.231 "base_bdevs_list": [ 00:26:43.231 { 00:26:43.231 "name": null, 00:26:43.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:43.231 "is_configured": false, 00:26:43.231 "data_offset": 256, 00:26:43.231 "data_size": 7936 00:26:43.231 }, 00:26:43.231 { 00:26:43.231 "name": "BaseBdev2", 00:26:43.231 "uuid": 
"0e26acd4-732f-5a5b-9cfa-4e72fc073fc9", 00:26:43.231 "is_configured": true, 00:26:43.231 "data_offset": 256, 00:26:43.231 "data_size": 7936 00:26:43.231 } 00:26:43.231 ] 00:26:43.231 }' 00:26:43.231 22:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:43.231 22:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:43.799 22:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:43.799 22:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:43.799 22:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:43.799 22:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:43.799 22:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:43.799 22:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.799 22:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:43.799 22:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:43.799 "name": "raid_bdev1", 00:26:43.799 "uuid": "a22f0932-c8b3-448f-b2d1-c1e7a2e32f88", 00:26:43.799 "strip_size_kb": 0, 00:26:43.799 "state": "online", 00:26:43.799 "raid_level": "raid1", 00:26:43.799 "superblock": true, 00:26:43.799 "num_base_bdevs": 2, 00:26:43.799 "num_base_bdevs_discovered": 1, 00:26:43.799 "num_base_bdevs_operational": 1, 00:26:43.799 "base_bdevs_list": [ 00:26:43.799 { 00:26:43.799 "name": null, 00:26:43.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:43.799 
"is_configured": false, 00:26:43.799 "data_offset": 256, 00:26:43.799 "data_size": 7936 00:26:43.799 }, 00:26:43.799 { 00:26:43.799 "name": "BaseBdev2", 00:26:43.799 "uuid": "0e26acd4-732f-5a5b-9cfa-4e72fc073fc9", 00:26:43.799 "is_configured": true, 00:26:43.799 "data_offset": 256, 00:26:43.799 "data_size": 7936 00:26:43.799 } 00:26:43.799 ] 00:26:43.799 }' 00:26:43.799 22:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:43.799 22:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:43.799 22:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:43.799 22:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:43.799 22:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:43.799 22:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:26:43.799 22:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:43.799 22:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:43.799 22:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:43.799 22:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:43.799 22:10:03 bdev_raid.raid_rebuild_test_sb_md_separate 
-- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:43.799 22:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:43.799 22:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:43.799 22:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:43.799 22:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:26:43.799 22:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:44.057 [2024-07-13 22:10:03.311739] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:44.057 [2024-07-13 22:10:03.311883] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:44.057 [2024-07-13 22:10:03.311898] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:44.057 request: 00:26:44.057 { 00:26:44.057 "base_bdev": "BaseBdev1", 00:26:44.057 "raid_bdev": "raid_bdev1", 00:26:44.057 "method": "bdev_raid_add_base_bdev", 00:26:44.057 "req_id": 1 00:26:44.057 } 00:26:44.057 Got JSON-RPC error response 00:26:44.057 response: 00:26:44.057 { 00:26:44.057 "code": -22, 00:26:44.057 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:26:44.057 } 00:26:44.057 22:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # es=1 00:26:44.057 22:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:44.057 22:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:44.057 22:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:44.057 22:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # sleep 1 00:26:44.992 22:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:44.992 22:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:44.992 22:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:44.992 22:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:44.992 22:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:44.992 22:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:44.992 22:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:44.992 22:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:44.992 22:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:44.992 22:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:44.992 22:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:44.992 22:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:45.251 22:10:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:45.251 "name": "raid_bdev1", 00:26:45.251 "uuid": "a22f0932-c8b3-448f-b2d1-c1e7a2e32f88", 00:26:45.251 "strip_size_kb": 0, 00:26:45.251 "state": "online", 00:26:45.251 "raid_level": "raid1", 00:26:45.251 "superblock": true, 00:26:45.251 "num_base_bdevs": 2, 00:26:45.251 "num_base_bdevs_discovered": 1, 00:26:45.251 "num_base_bdevs_operational": 1, 00:26:45.251 "base_bdevs_list": [ 00:26:45.251 { 00:26:45.251 "name": null, 00:26:45.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:45.251 "is_configured": false, 00:26:45.251 "data_offset": 256, 00:26:45.251 "data_size": 7936 00:26:45.251 }, 00:26:45.251 { 00:26:45.251 "name": "BaseBdev2", 00:26:45.251 "uuid": "0e26acd4-732f-5a5b-9cfa-4e72fc073fc9", 00:26:45.251 "is_configured": true, 00:26:45.251 "data_offset": 256, 00:26:45.251 "data_size": 7936 00:26:45.251 } 00:26:45.251 ] 00:26:45.251 }' 00:26:45.251 22:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:45.251 22:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:45.817 22:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:45.817 22:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:45.817 22:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:45.817 22:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:45.817 22:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:45.817 22:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:26:45.817 22:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:45.817 22:10:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:45.817 "name": "raid_bdev1", 00:26:45.817 "uuid": "a22f0932-c8b3-448f-b2d1-c1e7a2e32f88", 00:26:45.817 "strip_size_kb": 0, 00:26:45.817 "state": "online", 00:26:45.817 "raid_level": "raid1", 00:26:45.817 "superblock": true, 00:26:45.817 "num_base_bdevs": 2, 00:26:45.817 "num_base_bdevs_discovered": 1, 00:26:45.817 "num_base_bdevs_operational": 1, 00:26:45.817 "base_bdevs_list": [ 00:26:45.817 { 00:26:45.817 "name": null, 00:26:45.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:45.817 "is_configured": false, 00:26:45.817 "data_offset": 256, 00:26:45.817 "data_size": 7936 00:26:45.817 }, 00:26:45.817 { 00:26:45.817 "name": "BaseBdev2", 00:26:45.817 "uuid": "0e26acd4-732f-5a5b-9cfa-4e72fc073fc9", 00:26:45.817 "is_configured": true, 00:26:45.817 "data_offset": 256, 00:26:45.817 "data_size": 7936 00:26:45.817 } 00:26:45.817 ] 00:26:45.817 }' 00:26:45.817 22:10:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:45.817 22:10:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:45.817 22:10:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:46.076 22:10:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:46.076 22:10:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # killprocess 1510944 00:26:46.076 22:10:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 1510944 ']' 00:26:46.076 22:10:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 1510944 00:26:46.076 22:10:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:26:46.076 22:10:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:46.076 22:10:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1510944 00:26:46.076 22:10:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:46.076 22:10:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:46.076 22:10:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1510944' 00:26:46.076 killing process with pid 1510944 00:26:46.076 22:10:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 1510944 00:26:46.076 Received shutdown signal, test time was about 60.000000 seconds 00:26:46.076 00:26:46.076 Latency(us) 00:26:46.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.076 =================================================================================================================== 00:26:46.076 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:46.076 22:10:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 1510944 00:26:46.076 [2024-07-13 22:10:05.291414] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:46.076 [2024-07-13 22:10:05.291526] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:46.076 [2024-07-13 22:10:05.291576] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:46.076 [2024-07-13 22:10:05.291588] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043880 name raid_bdev1, state offline 00:26:46.335 [2024-07-13 22:10:05.594133] 
bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:47.712 22:10:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # return 0 00:26:47.712 00:26:47.712 real 0m27.139s 00:26:47.712 user 0m39.812s 00:26:47.712 sys 0m4.170s 00:26:47.712 22:10:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:47.712 22:10:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:47.712 ************************************ 00:26:47.712 END TEST raid_rebuild_test_sb_md_separate 00:26:47.712 ************************************ 00:26:47.712 22:10:06 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:47.712 22:10:06 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:26:47.712 22:10:06 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:26:47.712 22:10:06 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:26:47.712 22:10:06 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:47.712 22:10:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:47.712 ************************************ 00:26:47.712 START TEST raid_state_function_test_sb_md_interleaved 00:26:47.712 ************************************ 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@223 -- # local raid_bdev 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:26:47.712 22:10:06 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=1516045 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1516045' 00:26:47.712 Process raid pid: 1516045 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 1516045 /var/tmp/spdk-raid.sock 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 1516045 ']' 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:47.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
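The trace above shows `waitforlisten` blocking until the freshly launched bdev_svc process exposes its RPC socket at /var/tmp/spdk-raid.sock. The underlying idea — poll until a UNIX-domain socket accepts connections — can be sketched in Python (a hypothetical stand-in for illustration, not SPDK's actual `waitforlisten` implementation):

```python
import socket
import time

def wait_for_unix_socket(path, timeout=5.0, interval=0.05):
    """Poll until a UNIX-domain socket at `path` accepts connections.

    Returns True once a connect() succeeds, False if `timeout` elapses.
    (Illustrative sketch only; SPDK's waitforlisten lives in
    test/common/autotest_common.sh.)
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True
        except OSError:
            # Socket not created yet, or nobody listening: retry shortly.
            time.sleep(interval)
        finally:
            s.close()
    return False
```

A test harness would create the listening socket first and then call `wait_for_unix_socket` from the client side, mirroring the "Waiting for process to start up and listen on UNIX domain socket" message in the log.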
00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:47.712 22:10:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:26:47.712 [2024-07-13 22:10:06.980484] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:26:47.712 [2024-07-13 22:10:06.980586] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3d:01.0 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3d:01.1 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3d:01.2 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3d:01.3 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3d:01.4 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3d:01.5 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3d:01.6 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3d:01.7 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3d:02.0 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3d:02.1 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3d:02.2 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3d:02.3 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3d:02.4 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3d:02.5 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3d:02.6 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3d:02.7 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3f:01.0 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3f:01.1 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3f:01.2 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3f:01.3 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3f:01.4 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3f:01.5 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3f:01.6 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3f:01.7 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3f:02.0 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3f:02.1 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3f:02.2 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3f:02.3 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3f:02.4 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3f:02.5 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3f:02.6 cannot be used
00:26:47.712 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:47.712 EAL: Requested device 0000:3f:02.7 cannot be used
00:26:47.971 [2024-07-13 22:10:07.146907] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:47.971 [2024-07-13 22:10:07.359159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:26:48.230 [2024-07-13 22:10:07.610217] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:26:48.230 [2024-07-13 22:10:07.610250] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:26:48.488 22:10:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:48.488 22:10:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0
00:26:48.488 22:10:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- #
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:48.746 [2024-07-13 22:10:07.894458] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:48.746 [2024-07-13 22:10:07.894511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:48.746 [2024-07-13 22:10:07.894526] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:48.746 [2024-07-13 22:10:07.894539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:48.746 22:10:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:48.746 22:10:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:48.746 22:10:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:48.746 22:10:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:48.746 22:10:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:48.746 22:10:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:48.746 22:10:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:48.746 22:10:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:48.746 22:10:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:48.746 22:10:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:26:48.746 22:10:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:48.746 22:10:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:48.746 22:10:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:48.746 "name": "Existed_Raid", 00:26:48.747 "uuid": "a64b9482-4dc1-434e-bcb3-41e44cb6c718", 00:26:48.747 "strip_size_kb": 0, 00:26:48.747 "state": "configuring", 00:26:48.747 "raid_level": "raid1", 00:26:48.747 "superblock": true, 00:26:48.747 "num_base_bdevs": 2, 00:26:48.747 "num_base_bdevs_discovered": 0, 00:26:48.747 "num_base_bdevs_operational": 2, 00:26:48.747 "base_bdevs_list": [ 00:26:48.747 { 00:26:48.747 "name": "BaseBdev1", 00:26:48.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:48.747 "is_configured": false, 00:26:48.747 "data_offset": 0, 00:26:48.747 "data_size": 0 00:26:48.747 }, 00:26:48.747 { 00:26:48.747 "name": "BaseBdev2", 00:26:48.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:48.747 "is_configured": false, 00:26:48.747 "data_offset": 0, 00:26:48.747 "data_size": 0 00:26:48.747 } 00:26:48.747 ] 00:26:48.747 }' 00:26:48.747 22:10:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:48.747 22:10:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:49.313 22:10:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:49.571 [2024-07-13 22:10:08.744567] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:49.571 
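The `verify_raid_bdev_state` helper traced above pulls `bdev_raid_get_bdevs all` over the RPC socket and narrows the array with `jq -r '.[] | select(.name == "Existed_Raid")'` before comparing fields such as state and num_base_bdevs_operational. The same selection can be re-expressed in Python for illustration (a sketch of the jq logic, not SPDK code), using the JSON shape captured in the log:

```python
import json

def select_bdev(raid_bdevs_json, name):
    """Equivalent of jq '.[] | select(.name == NAME)' on bdev_raid_get_bdevs output."""
    return next((b for b in json.loads(raid_bdevs_json) if b["name"] == name), None)

# JSON shape as captured by the trace above, trimmed to the checked fields.
sample = '''[{"name": "Existed_Raid", "state": "configuring",
             "raid_level": "raid1", "superblock": true,
             "num_base_bdevs": 2,
             "num_base_bdevs_discovered": 0,
             "num_base_bdevs_operational": 2}]'''

info = select_bdev(sample, "Existed_Raid")
assert info["state"] == "configuring"            # expected_state in the trace
assert info["num_base_bdevs_operational"] == 2   # no base bdevs configured yet
```

The jq filter `.process.type // "none"` seen in the earlier `verify_raid_bdev_process` trace works the same way: select the bdev, then fall back to "none" when no rebuild process is attached.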
[2024-07-13 22:10:08.744603] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:26:49.571 22:10:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:49.571 [2024-07-13 22:10:08.913040] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:49.571 [2024-07-13 22:10:08.913077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:49.571 [2024-07-13 22:10:08.913087] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:49.571 [2024-07-13 22:10:08.913098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:49.571 22:10:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:26:49.829 [2024-07-13 22:10:09.109500] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:49.829 BaseBdev1 00:26:49.829 22:10:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:26:49.829 22:10:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:26:49.829 22:10:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:49.829 22:10:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:26:49.829 22:10:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:49.829 22:10:09 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:49.829 22:10:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:50.121 22:10:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:50.121 [ 00:26:50.121 { 00:26:50.121 "name": "BaseBdev1", 00:26:50.121 "aliases": [ 00:26:50.121 "d7e28b2d-80ba-43ea-aafb-c9155e86edc6" 00:26:50.121 ], 00:26:50.121 "product_name": "Malloc disk", 00:26:50.121 "block_size": 4128, 00:26:50.121 "num_blocks": 8192, 00:26:50.121 "uuid": "d7e28b2d-80ba-43ea-aafb-c9155e86edc6", 00:26:50.121 "md_size": 32, 00:26:50.121 "md_interleave": true, 00:26:50.121 "dif_type": 0, 00:26:50.121 "assigned_rate_limits": { 00:26:50.121 "rw_ios_per_sec": 0, 00:26:50.121 "rw_mbytes_per_sec": 0, 00:26:50.121 "r_mbytes_per_sec": 0, 00:26:50.121 "w_mbytes_per_sec": 0 00:26:50.121 }, 00:26:50.121 "claimed": true, 00:26:50.121 "claim_type": "exclusive_write", 00:26:50.121 "zoned": false, 00:26:50.121 "supported_io_types": { 00:26:50.121 "read": true, 00:26:50.121 "write": true, 00:26:50.121 "unmap": true, 00:26:50.121 "flush": true, 00:26:50.121 "reset": true, 00:26:50.121 "nvme_admin": false, 00:26:50.121 "nvme_io": false, 00:26:50.121 "nvme_io_md": false, 00:26:50.121 "write_zeroes": true, 00:26:50.121 "zcopy": true, 00:26:50.121 "get_zone_info": false, 00:26:50.121 "zone_management": false, 00:26:50.121 "zone_append": false, 00:26:50.121 "compare": false, 00:26:50.121 "compare_and_write": false, 00:26:50.121 "abort": true, 00:26:50.121 "seek_hole": false, 00:26:50.121 "seek_data": false, 00:26:50.121 "copy": true, 00:26:50.121 "nvme_iov_md": false 00:26:50.121 }, 
00:26:50.121 "memory_domains": [ 00:26:50.121 { 00:26:50.121 "dma_device_id": "system", 00:26:50.121 "dma_device_type": 1 00:26:50.121 }, 00:26:50.121 { 00:26:50.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:50.121 "dma_device_type": 2 00:26:50.121 } 00:26:50.121 ], 00:26:50.121 "driver_specific": {} 00:26:50.121 } 00:26:50.121 ] 00:26:50.121 22:10:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:26:50.122 22:10:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:50.122 22:10:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:50.122 22:10:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:50.122 22:10:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:50.122 22:10:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:50.122 22:10:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:50.122 22:10:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:50.122 22:10:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:50.122 22:10:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:50.122 22:10:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:50.122 22:10:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:26:50.122 22:10:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:50.380 22:10:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:50.380 "name": "Existed_Raid", 00:26:50.380 "uuid": "beb3ffea-0a71-478d-9190-e28eaeb6761b", 00:26:50.380 "strip_size_kb": 0, 00:26:50.380 "state": "configuring", 00:26:50.380 "raid_level": "raid1", 00:26:50.380 "superblock": true, 00:26:50.380 "num_base_bdevs": 2, 00:26:50.380 "num_base_bdevs_discovered": 1, 00:26:50.380 "num_base_bdevs_operational": 2, 00:26:50.380 "base_bdevs_list": [ 00:26:50.380 { 00:26:50.380 "name": "BaseBdev1", 00:26:50.380 "uuid": "d7e28b2d-80ba-43ea-aafb-c9155e86edc6", 00:26:50.380 "is_configured": true, 00:26:50.380 "data_offset": 256, 00:26:50.380 "data_size": 7936 00:26:50.380 }, 00:26:50.380 { 00:26:50.380 "name": "BaseBdev2", 00:26:50.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.380 "is_configured": false, 00:26:50.380 "data_offset": 0, 00:26:50.380 "data_size": 0 00:26:50.380 } 00:26:50.380 ] 00:26:50.380 }' 00:26:50.380 22:10:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:50.380 22:10:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:50.948 22:10:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:50.948 [2024-07-13 22:10:10.312795] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:50.948 [2024-07-13 22:10:10.312851] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:26:50.948 22:10:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:51.207 [2024-07-13 22:10:10.481291] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:51.207 [2024-07-13 22:10:10.483055] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:51.207 [2024-07-13 22:10:10.483096] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:51.207 22:10:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:26:51.207 22:10:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:51.207 22:10:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:51.207 22:10:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:51.207 22:10:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:51.207 22:10:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:51.207 22:10:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:51.207 22:10:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:51.208 22:10:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:51.208 22:10:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:51.208 22:10:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:51.208 22:10:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:51.208 22:10:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.208 22:10:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:51.467 22:10:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:51.467 "name": "Existed_Raid", 00:26:51.467 "uuid": "fccb4fa2-9622-48e4-b079-1a411c8c1128", 00:26:51.467 "strip_size_kb": 0, 00:26:51.467 "state": "configuring", 00:26:51.467 "raid_level": "raid1", 00:26:51.467 "superblock": true, 00:26:51.467 "num_base_bdevs": 2, 00:26:51.467 "num_base_bdevs_discovered": 1, 00:26:51.467 "num_base_bdevs_operational": 2, 00:26:51.467 "base_bdevs_list": [ 00:26:51.467 { 00:26:51.467 "name": "BaseBdev1", 00:26:51.467 "uuid": "d7e28b2d-80ba-43ea-aafb-c9155e86edc6", 00:26:51.467 "is_configured": true, 00:26:51.467 "data_offset": 256, 00:26:51.467 "data_size": 7936 00:26:51.467 }, 00:26:51.467 { 00:26:51.467 "name": "BaseBdev2", 00:26:51.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:51.467 "is_configured": false, 00:26:51.467 "data_offset": 0, 00:26:51.467 "data_size": 0 00:26:51.467 } 00:26:51.467 ] 00:26:51.467 }' 00:26:51.467 22:10:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:51.467 22:10:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:52.035 22:10:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 
4096 -m 32 -i -b BaseBdev2 00:26:52.035 [2024-07-13 22:10:11.328132] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:52.035 [2024-07-13 22:10:11.328334] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:26:52.035 [2024-07-13 22:10:11.328350] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:26:52.035 [2024-07-13 22:10:11.328437] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:26:52.035 [2024-07-13 22:10:11.328546] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:26:52.035 [2024-07-13 22:10:11.328560] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:26:52.035 [2024-07-13 22:10:11.328635] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:52.035 BaseBdev2 00:26:52.035 22:10:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:26:52.035 22:10:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:26:52.035 22:10:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:52.035 22:10:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:26:52.035 22:10:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:52.035 22:10:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:52.035 22:10:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:52.294 22:10:11 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:52.294 [ 00:26:52.294 { 00:26:52.294 "name": "BaseBdev2", 00:26:52.294 "aliases": [ 00:26:52.294 "2929e3d7-59f6-4481-98f1-108189d356ad" 00:26:52.294 ], 00:26:52.294 "product_name": "Malloc disk", 00:26:52.294 "block_size": 4128, 00:26:52.294 "num_blocks": 8192, 00:26:52.294 "uuid": "2929e3d7-59f6-4481-98f1-108189d356ad", 00:26:52.294 "md_size": 32, 00:26:52.294 "md_interleave": true, 00:26:52.294 "dif_type": 0, 00:26:52.294 "assigned_rate_limits": { 00:26:52.294 "rw_ios_per_sec": 0, 00:26:52.294 "rw_mbytes_per_sec": 0, 00:26:52.294 "r_mbytes_per_sec": 0, 00:26:52.294 "w_mbytes_per_sec": 0 00:26:52.294 }, 00:26:52.294 "claimed": true, 00:26:52.294 "claim_type": "exclusive_write", 00:26:52.294 "zoned": false, 00:26:52.294 "supported_io_types": { 00:26:52.294 "read": true, 00:26:52.294 "write": true, 00:26:52.294 "unmap": true, 00:26:52.294 "flush": true, 00:26:52.294 "reset": true, 00:26:52.294 "nvme_admin": false, 00:26:52.294 "nvme_io": false, 00:26:52.294 "nvme_io_md": false, 00:26:52.294 "write_zeroes": true, 00:26:52.294 "zcopy": true, 00:26:52.294 "get_zone_info": false, 00:26:52.294 "zone_management": false, 00:26:52.294 "zone_append": false, 00:26:52.294 "compare": false, 00:26:52.294 "compare_and_write": false, 00:26:52.295 "abort": true, 00:26:52.295 "seek_hole": false, 00:26:52.295 "seek_data": false, 00:26:52.295 "copy": true, 00:26:52.295 "nvme_iov_md": false 00:26:52.295 }, 00:26:52.295 "memory_domains": [ 00:26:52.295 { 00:26:52.295 "dma_device_id": "system", 00:26:52.295 "dma_device_type": 1 00:26:52.295 }, 00:26:52.295 { 00:26:52.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:52.295 "dma_device_type": 2 00:26:52.295 } 00:26:52.295 ], 00:26:52.295 "driver_specific": {} 00:26:52.295 } 00:26:52.295 ] 00:26:52.554 22:10:11 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:26:52.554 22:10:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:52.554 22:10:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:52.554 22:10:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:26:52.554 22:10:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:52.554 22:10:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:52.554 22:10:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:52.554 22:10:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:52.554 22:10:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:52.554 22:10:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:52.554 22:10:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:52.554 22:10:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:52.554 22:10:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:52.554 22:10:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.554 22:10:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:26:52.554 22:10:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:52.554 "name": "Existed_Raid", 00:26:52.554 "uuid": "fccb4fa2-9622-48e4-b079-1a411c8c1128", 00:26:52.554 "strip_size_kb": 0, 00:26:52.554 "state": "online", 00:26:52.554 "raid_level": "raid1", 00:26:52.554 "superblock": true, 00:26:52.554 "num_base_bdevs": 2, 00:26:52.554 "num_base_bdevs_discovered": 2, 00:26:52.554 "num_base_bdevs_operational": 2, 00:26:52.555 "base_bdevs_list": [ 00:26:52.555 { 00:26:52.555 "name": "BaseBdev1", 00:26:52.555 "uuid": "d7e28b2d-80ba-43ea-aafb-c9155e86edc6", 00:26:52.555 "is_configured": true, 00:26:52.555 "data_offset": 256, 00:26:52.555 "data_size": 7936 00:26:52.555 }, 00:26:52.555 { 00:26:52.555 "name": "BaseBdev2", 00:26:52.555 "uuid": "2929e3d7-59f6-4481-98f1-108189d356ad", 00:26:52.555 "is_configured": true, 00:26:52.555 "data_offset": 256, 00:26:52.555 "data_size": 7936 00:26:52.555 } 00:26:52.555 ] 00:26:52.555 }' 00:26:52.555 22:10:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:52.555 22:10:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:53.124 22:10:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:26:53.124 22:10:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:53.124 22:10:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:53.124 22:10:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:53.124 22:10:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:53.124 22:10:12 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:26:53.124 22:10:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:53.124 22:10:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:53.124 [2024-07-13 22:10:12.491512] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:53.124 22:10:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:53.124 "name": "Existed_Raid", 00:26:53.124 "aliases": [ 00:26:53.124 "fccb4fa2-9622-48e4-b079-1a411c8c1128" 00:26:53.124 ], 00:26:53.124 "product_name": "Raid Volume", 00:26:53.124 "block_size": 4128, 00:26:53.124 "num_blocks": 7936, 00:26:53.124 "uuid": "fccb4fa2-9622-48e4-b079-1a411c8c1128", 00:26:53.124 "md_size": 32, 00:26:53.124 "md_interleave": true, 00:26:53.124 "dif_type": 0, 00:26:53.124 "assigned_rate_limits": { 00:26:53.124 "rw_ios_per_sec": 0, 00:26:53.124 "rw_mbytes_per_sec": 0, 00:26:53.124 "r_mbytes_per_sec": 0, 00:26:53.124 "w_mbytes_per_sec": 0 00:26:53.124 }, 00:26:53.124 "claimed": false, 00:26:53.124 "zoned": false, 00:26:53.124 "supported_io_types": { 00:26:53.124 "read": true, 00:26:53.124 "write": true, 00:26:53.124 "unmap": false, 00:26:53.124 "flush": false, 00:26:53.124 "reset": true, 00:26:53.124 "nvme_admin": false, 00:26:53.124 "nvme_io": false, 00:26:53.124 "nvme_io_md": false, 00:26:53.124 "write_zeroes": true, 00:26:53.124 "zcopy": false, 00:26:53.124 "get_zone_info": false, 00:26:53.124 "zone_management": false, 00:26:53.124 "zone_append": false, 00:26:53.124 "compare": false, 00:26:53.124 "compare_and_write": false, 00:26:53.124 "abort": false, 00:26:53.124 "seek_hole": false, 00:26:53.124 "seek_data": false, 00:26:53.124 "copy": false, 00:26:53.124 
"nvme_iov_md": false 00:26:53.124 }, 00:26:53.124 "memory_domains": [ 00:26:53.124 { 00:26:53.124 "dma_device_id": "system", 00:26:53.124 "dma_device_type": 1 00:26:53.124 }, 00:26:53.124 { 00:26:53.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:53.124 "dma_device_type": 2 00:26:53.124 }, 00:26:53.124 { 00:26:53.124 "dma_device_id": "system", 00:26:53.124 "dma_device_type": 1 00:26:53.124 }, 00:26:53.124 { 00:26:53.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:53.124 "dma_device_type": 2 00:26:53.124 } 00:26:53.124 ], 00:26:53.124 "driver_specific": { 00:26:53.124 "raid": { 00:26:53.124 "uuid": "fccb4fa2-9622-48e4-b079-1a411c8c1128", 00:26:53.124 "strip_size_kb": 0, 00:26:53.124 "state": "online", 00:26:53.124 "raid_level": "raid1", 00:26:53.124 "superblock": true, 00:26:53.124 "num_base_bdevs": 2, 00:26:53.124 "num_base_bdevs_discovered": 2, 00:26:53.124 "num_base_bdevs_operational": 2, 00:26:53.124 "base_bdevs_list": [ 00:26:53.124 { 00:26:53.124 "name": "BaseBdev1", 00:26:53.124 "uuid": "d7e28b2d-80ba-43ea-aafb-c9155e86edc6", 00:26:53.124 "is_configured": true, 00:26:53.124 "data_offset": 256, 00:26:53.124 "data_size": 7936 00:26:53.124 }, 00:26:53.124 { 00:26:53.124 "name": "BaseBdev2", 00:26:53.124 "uuid": "2929e3d7-59f6-4481-98f1-108189d356ad", 00:26:53.124 "is_configured": true, 00:26:53.124 "data_offset": 256, 00:26:53.124 "data_size": 7936 00:26:53.124 } 00:26:53.124 ] 00:26:53.124 } 00:26:53.124 } 00:26:53.124 }' 00:26:53.384 22:10:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:53.384 22:10:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:26:53.384 BaseBdev2' 00:26:53.384 22:10:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:53.384 22:10:12 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:26:53.384 22:10:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:53.384 22:10:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:53.384 "name": "BaseBdev1", 00:26:53.384 "aliases": [ 00:26:53.384 "d7e28b2d-80ba-43ea-aafb-c9155e86edc6" 00:26:53.384 ], 00:26:53.384 "product_name": "Malloc disk", 00:26:53.384 "block_size": 4128, 00:26:53.384 "num_blocks": 8192, 00:26:53.384 "uuid": "d7e28b2d-80ba-43ea-aafb-c9155e86edc6", 00:26:53.384 "md_size": 32, 00:26:53.384 "md_interleave": true, 00:26:53.384 "dif_type": 0, 00:26:53.384 "assigned_rate_limits": { 00:26:53.384 "rw_ios_per_sec": 0, 00:26:53.384 "rw_mbytes_per_sec": 0, 00:26:53.384 "r_mbytes_per_sec": 0, 00:26:53.384 "w_mbytes_per_sec": 0 00:26:53.384 }, 00:26:53.384 "claimed": true, 00:26:53.384 "claim_type": "exclusive_write", 00:26:53.384 "zoned": false, 00:26:53.384 "supported_io_types": { 00:26:53.384 "read": true, 00:26:53.384 "write": true, 00:26:53.384 "unmap": true, 00:26:53.384 "flush": true, 00:26:53.384 "reset": true, 00:26:53.384 "nvme_admin": false, 00:26:53.384 "nvme_io": false, 00:26:53.384 "nvme_io_md": false, 00:26:53.384 "write_zeroes": true, 00:26:53.384 "zcopy": true, 00:26:53.384 "get_zone_info": false, 00:26:53.384 "zone_management": false, 00:26:53.384 "zone_append": false, 00:26:53.384 "compare": false, 00:26:53.384 "compare_and_write": false, 00:26:53.384 "abort": true, 00:26:53.384 "seek_hole": false, 00:26:53.384 "seek_data": false, 00:26:53.384 "copy": true, 00:26:53.384 "nvme_iov_md": false 00:26:53.384 }, 00:26:53.384 "memory_domains": [ 00:26:53.384 { 00:26:53.384 "dma_device_id": "system", 00:26:53.384 "dma_device_type": 1 00:26:53.384 }, 00:26:53.384 { 00:26:53.384 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:53.384 "dma_device_type": 2 00:26:53.384 } 00:26:53.384 ], 00:26:53.384 "driver_specific": {} 00:26:53.384 }' 00:26:53.384 22:10:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:53.384 22:10:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:53.643 22:10:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:26:53.643 22:10:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:53.643 22:10:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:53.643 22:10:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:26:53.643 22:10:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:53.643 22:10:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:53.643 22:10:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:26:53.643 22:10:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:53.643 22:10:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:53.643 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:26:53.643 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:53.643 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:53.643 22:10:13 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:53.903 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:53.903 "name": "BaseBdev2", 00:26:53.903 "aliases": [ 00:26:53.903 "2929e3d7-59f6-4481-98f1-108189d356ad" 00:26:53.903 ], 00:26:53.903 "product_name": "Malloc disk", 00:26:53.903 "block_size": 4128, 00:26:53.903 "num_blocks": 8192, 00:26:53.903 "uuid": "2929e3d7-59f6-4481-98f1-108189d356ad", 00:26:53.903 "md_size": 32, 00:26:53.903 "md_interleave": true, 00:26:53.903 "dif_type": 0, 00:26:53.903 "assigned_rate_limits": { 00:26:53.903 "rw_ios_per_sec": 0, 00:26:53.903 "rw_mbytes_per_sec": 0, 00:26:53.903 "r_mbytes_per_sec": 0, 00:26:53.903 "w_mbytes_per_sec": 0 00:26:53.903 }, 00:26:53.903 "claimed": true, 00:26:53.903 "claim_type": "exclusive_write", 00:26:53.903 "zoned": false, 00:26:53.903 "supported_io_types": { 00:26:53.903 "read": true, 00:26:53.903 "write": true, 00:26:53.903 "unmap": true, 00:26:53.903 "flush": true, 00:26:53.903 "reset": true, 00:26:53.903 "nvme_admin": false, 00:26:53.903 "nvme_io": false, 00:26:53.903 "nvme_io_md": false, 00:26:53.903 "write_zeroes": true, 00:26:53.903 "zcopy": true, 00:26:53.903 "get_zone_info": false, 00:26:53.903 "zone_management": false, 00:26:53.903 "zone_append": false, 00:26:53.903 "compare": false, 00:26:53.903 "compare_and_write": false, 00:26:53.903 "abort": true, 00:26:53.903 "seek_hole": false, 00:26:53.903 "seek_data": false, 00:26:53.903 "copy": true, 00:26:53.903 "nvme_iov_md": false 00:26:53.903 }, 00:26:53.903 "memory_domains": [ 00:26:53.903 { 00:26:53.903 "dma_device_id": "system", 00:26:53.903 "dma_device_type": 1 00:26:53.903 }, 00:26:53.903 { 00:26:53.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:53.903 "dma_device_type": 2 00:26:53.903 } 00:26:53.903 ], 00:26:53.903 "driver_specific": {} 00:26:53.903 }' 00:26:53.903 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:53.903 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:53.903 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:26:53.903 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:54.161 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:54.161 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:26:54.161 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:54.161 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:54.161 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:26:54.161 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:54.161 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:54.161 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:26:54.161 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:54.419 [2024-07-13 22:10:13.662367] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:54.419 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:26:54.419 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:26:54.419 22:10:13 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:54.419 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:26:54.419 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:26:54.419 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:26:54.419 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:54.419 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:54.419 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:54.419 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:54.419 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:54.419 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:54.419 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:54.419 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:54.419 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:54.419 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:54.419 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.677 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:54.677 "name": "Existed_Raid", 00:26:54.677 "uuid": "fccb4fa2-9622-48e4-b079-1a411c8c1128", 00:26:54.677 "strip_size_kb": 0, 00:26:54.677 "state": "online", 00:26:54.677 "raid_level": "raid1", 00:26:54.677 "superblock": true, 00:26:54.677 "num_base_bdevs": 2, 00:26:54.677 "num_base_bdevs_discovered": 1, 00:26:54.677 "num_base_bdevs_operational": 1, 00:26:54.677 "base_bdevs_list": [ 00:26:54.677 { 00:26:54.677 "name": null, 00:26:54.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:54.677 "is_configured": false, 00:26:54.677 "data_offset": 256, 00:26:54.677 "data_size": 7936 00:26:54.677 }, 00:26:54.677 { 00:26:54.677 "name": "BaseBdev2", 00:26:54.677 "uuid": "2929e3d7-59f6-4481-98f1-108189d356ad", 00:26:54.677 "is_configured": true, 00:26:54.677 "data_offset": 256, 00:26:54.677 "data_size": 7936 00:26:54.677 } 00:26:54.677 ] 00:26:54.677 }' 00:26:54.677 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:54.677 22:10:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:55.245 22:10:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:26:55.245 22:10:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:55.245 22:10:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:55.245 22:10:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.245 22:10:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 
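The `has_redundancy raid1` call above is what lets the test expect `online` (rather than `offline`) after `bdev_malloc_delete BaseBdev1`: a mirrored raid1 array survives the loss of one base bdev with `num_base_bdevs_operational` dropping to 1. A sketch of that decision, assuming only what the trace shows (the full level list in the real helper may differ):

```shell
#!/bin/sh
# Sketch of the has_redundancy / expected_state logic exercised above.
has_redundancy() {
    # raid1 mirrors data, so it tolerates a missing base bdev. Treating
    # every other level as non-redundant is an assumption of this sketch.
    case $1 in
    raid1) return 0 ;;
    *) return 1 ;;
    esac
}

if has_redundancy raid1; then
    expected_state=online
else
    expected_state=offline
fi
echo "after removing one base bdev, expected_state=$expected_state"
```

This matches the trace: the removed slot shows up in `base_bdevs_list` as a null name with the all-zero UUID and `is_configured: false`, while the array state remains `online`.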
00:26:55.245 22:10:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:55.245 22:10:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:55.505 [2024-07-13 22:10:14.713913] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:55.505 [2024-07-13 22:10:14.714029] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:55.505 [2024-07-13 22:10:14.810047] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:55.505 [2024-07-13 22:10:14.810095] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:55.505 [2024-07-13 22:10:14.810111] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:26:55.505 22:10:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:55.505 22:10:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:55.505 22:10:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.505 22:10:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:26:55.765 22:10:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:26:55.765 22:10:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:26:55.765 22:10:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 
00:26:55.765 22:10:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 1516045 00:26:55.765 22:10:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 1516045 ']' 00:26:55.765 22:10:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 1516045 00:26:55.765 22:10:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:26:55.765 22:10:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:55.765 22:10:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1516045 00:26:55.765 22:10:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:55.765 22:10:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:55.765 22:10:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1516045' 00:26:55.765 killing process with pid 1516045 00:26:55.765 22:10:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 1516045 00:26:55.765 [2024-07-13 22:10:15.058726] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:55.765 22:10:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 1516045 00:26:55.765 [2024-07-13 22:10:15.075457] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:57.143 22:10:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:26:57.143 00:26:57.143 real 0m9.429s 00:26:57.143 user 0m15.432s 00:26:57.143 sys 0m1.804s 00:26:57.143 22:10:16 
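The teardown above follows autotest_common.sh's `killprocess` pattern: probe the pid with `kill -0`, check the process name via `ps -o comm=` so a bare `sudo` is never force-killed, then kill and reap it. A runnable sketch of that sequence, using a throwaway `sleep` in place of the SPDK app at pid 1516045:

```shell
#!/bin/sh
# Sketch of the killprocess helper pattern seen above, against a dummy child.
sleep 60 &
pid=$!

kill -0 "$pid"                          # assert the process exists
name=$(ps -o comm= -p "$pid")
[ "$name" != sudo ]                     # refuse to kill a bare sudo
kill "$pid"
wait "$pid" 2>/dev/null || true         # reap; non-zero exit is expected
echo "killed process $pid"
```

The real helper additionally retries and escalates to `kill -9`; this sketch keeps only the happy path shown in the trace.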
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:57.143 22:10:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:57.143 ************************************ 00:26:57.143 END TEST raid_state_function_test_sb_md_interleaved 00:26:57.143 ************************************ 00:26:57.143 22:10:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:57.143 22:10:16 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:26:57.143 22:10:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:26:57.143 22:10:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:57.143 22:10:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:57.143 ************************************ 00:26:57.143 START TEST raid_superblock_test_md_interleaved 00:26:57.143 ************************************ 00:26:57.143 22:10:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:26:57.143 22:10:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:26:57.143 22:10:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:26:57.143 22:10:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:26:57.143 22:10:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:26:57.144 22:10:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:26:57.144 22:10:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:26:57.144 22:10:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 
00:26:57.144 22:10:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:26:57.144 22:10:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:26:57.144 22:10:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:26:57.144 22:10:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:26:57.144 22:10:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:26:57.144 22:10:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:26:57.144 22:10:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:26:57.144 22:10:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:26:57.144 22:10:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=1517839 00:26:57.144 22:10:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 1517839 /var/tmp/spdk-raid.sock 00:26:57.144 22:10:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:26:57.144 22:10:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 1517839 ']' 00:26:57.144 22:10:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:57.144 22:10:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:57.144 22:10:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk-raid.sock...' 00:26:57.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:57.144 22:10:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:57.144 22:10:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:57.144 [2024-07-13 22:10:16.494581] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:26:57.144 [2024-07-13 22:10:16.494684] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1517839 ] 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:26:57.403 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: 
Requested device 0000:3f:01.6 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:57.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.403 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:57.403 [2024-07-13 22:10:16.656107] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.662 [2024-07-13 22:10:16.859157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.921 [2024-07-13 22:10:17.100274] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:57.921 [2024-07-13 22:10:17.100309] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:57.921 22:10:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:57.921 22:10:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # return 0 
00:26:57.921 22:10:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:26:57.921 22:10:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:57.921 22:10:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:26:57.921 22:10:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:26:57.921 22:10:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:57.921 22:10:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:57.921 22:10:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:57.921 22:10:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:57.921 22:10:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:26:58.180 malloc1 00:26:58.180 22:10:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:58.439 [2024-07-13 22:10:17.584049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:58.439 [2024-07-13 22:10:17.584102] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:58.439 [2024-07-13 22:10:17.584126] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:26:58.439 [2024-07-13 22:10:17.584139] vbdev_passthru.c: 695:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:26:58.439 [2024-07-13 22:10:17.586005] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:58.439 [2024-07-13 22:10:17.586033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:58.439 pt1 00:26:58.439 22:10:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:58.439 22:10:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:58.439 22:10:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:26:58.439 22:10:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:26:58.439 22:10:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:58.439 22:10:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:58.439 22:10:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:58.439 22:10:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:58.439 22:10:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:26:58.439 malloc2 00:26:58.439 22:10:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:58.698 [2024-07-13 22:10:17.946364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:58.698 [2024-07-13 22:10:17.946418] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:58.698 [2024-07-13 22:10:17.946441] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:26:58.698 [2024-07-13 22:10:17.946457] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:58.698 [2024-07-13 22:10:17.948387] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:58.698 [2024-07-13 22:10:17.948420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:58.698 pt2 00:26:58.698 22:10:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:58.698 22:10:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:58.698 22:10:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:26:58.957 [2024-07-13 22:10:18.118827] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:58.957 [2024-07-13 22:10:18.120532] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:58.957 [2024-07-13 22:10:18.120706] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040880 00:26:58.957 [2024-07-13 22:10:18.120720] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:26:58.957 [2024-07-13 22:10:18.120795] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:26:58.957 [2024-07-13 22:10:18.120882] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040880 00:26:58.957 [2024-07-13 22:10:18.120894] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040880 00:26:58.957 [2024-07-13 22:10:18.120970] bdev_raid.c: 
331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:58.957 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:58.957 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:58.957 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:58.957 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:58.957 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:58.957 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:58.957 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:58.957 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:58.957 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:58.957 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:58.957 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:58.957 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:58.957 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:58.957 "name": "raid_bdev1", 00:26:58.957 "uuid": "405e3dab-4336-41dc-a2b2-445ff2b86de1", 00:26:58.957 "strip_size_kb": 0, 00:26:58.957 "state": "online", 00:26:58.957 "raid_level": "raid1", 00:26:58.957 
"superblock": true, 00:26:58.957 "num_base_bdevs": 2, 00:26:58.957 "num_base_bdevs_discovered": 2, 00:26:58.957 "num_base_bdevs_operational": 2, 00:26:58.957 "base_bdevs_list": [ 00:26:58.957 { 00:26:58.957 "name": "pt1", 00:26:58.957 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:58.957 "is_configured": true, 00:26:58.957 "data_offset": 256, 00:26:58.957 "data_size": 7936 00:26:58.957 }, 00:26:58.957 { 00:26:58.957 "name": "pt2", 00:26:58.957 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:58.957 "is_configured": true, 00:26:58.957 "data_offset": 256, 00:26:58.957 "data_size": 7936 00:26:58.957 } 00:26:58.957 ] 00:26:58.957 }' 00:26:58.957 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:58.957 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:59.525 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:26:59.525 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:59.525 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:59.525 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:59.525 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:59.525 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:26:59.525 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:59.525 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:59.785 [2024-07-13 22:10:18.929165] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:59.785 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:59.785 "name": "raid_bdev1", 00:26:59.785 "aliases": [ 00:26:59.785 "405e3dab-4336-41dc-a2b2-445ff2b86de1" 00:26:59.785 ], 00:26:59.785 "product_name": "Raid Volume", 00:26:59.785 "block_size": 4128, 00:26:59.785 "num_blocks": 7936, 00:26:59.785 "uuid": "405e3dab-4336-41dc-a2b2-445ff2b86de1", 00:26:59.785 "md_size": 32, 00:26:59.785 "md_interleave": true, 00:26:59.785 "dif_type": 0, 00:26:59.785 "assigned_rate_limits": { 00:26:59.785 "rw_ios_per_sec": 0, 00:26:59.785 "rw_mbytes_per_sec": 0, 00:26:59.785 "r_mbytes_per_sec": 0, 00:26:59.785 "w_mbytes_per_sec": 0 00:26:59.785 }, 00:26:59.785 "claimed": false, 00:26:59.785 "zoned": false, 00:26:59.785 "supported_io_types": { 00:26:59.785 "read": true, 00:26:59.785 "write": true, 00:26:59.785 "unmap": false, 00:26:59.785 "flush": false, 00:26:59.785 "reset": true, 00:26:59.785 "nvme_admin": false, 00:26:59.785 "nvme_io": false, 00:26:59.785 "nvme_io_md": false, 00:26:59.785 "write_zeroes": true, 00:26:59.785 "zcopy": false, 00:26:59.785 "get_zone_info": false, 00:26:59.785 "zone_management": false, 00:26:59.785 "zone_append": false, 00:26:59.785 "compare": false, 00:26:59.785 "compare_and_write": false, 00:26:59.785 "abort": false, 00:26:59.785 "seek_hole": false, 00:26:59.785 "seek_data": false, 00:26:59.785 "copy": false, 00:26:59.785 "nvme_iov_md": false 00:26:59.785 }, 00:26:59.785 "memory_domains": [ 00:26:59.785 { 00:26:59.785 "dma_device_id": "system", 00:26:59.785 "dma_device_type": 1 00:26:59.785 }, 00:26:59.785 { 00:26:59.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:59.785 "dma_device_type": 2 00:26:59.785 }, 00:26:59.785 { 00:26:59.785 "dma_device_id": "system", 00:26:59.785 "dma_device_type": 1 00:26:59.785 }, 00:26:59.785 { 00:26:59.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:59.785 
"dma_device_type": 2 00:26:59.785 } 00:26:59.785 ], 00:26:59.785 "driver_specific": { 00:26:59.785 "raid": { 00:26:59.785 "uuid": "405e3dab-4336-41dc-a2b2-445ff2b86de1", 00:26:59.785 "strip_size_kb": 0, 00:26:59.785 "state": "online", 00:26:59.785 "raid_level": "raid1", 00:26:59.785 "superblock": true, 00:26:59.785 "num_base_bdevs": 2, 00:26:59.785 "num_base_bdevs_discovered": 2, 00:26:59.785 "num_base_bdevs_operational": 2, 00:26:59.785 "base_bdevs_list": [ 00:26:59.785 { 00:26:59.785 "name": "pt1", 00:26:59.785 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:59.785 "is_configured": true, 00:26:59.785 "data_offset": 256, 00:26:59.785 "data_size": 7936 00:26:59.785 }, 00:26:59.785 { 00:26:59.785 "name": "pt2", 00:26:59.785 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:59.785 "is_configured": true, 00:26:59.785 "data_offset": 256, 00:26:59.785 "data_size": 7936 00:26:59.785 } 00:26:59.785 ] 00:26:59.785 } 00:26:59.785 } 00:26:59.785 }' 00:26:59.785 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:59.785 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:59.785 pt2' 00:26:59.785 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:59.785 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:59.785 22:10:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:59.785 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:59.785 "name": "pt1", 00:26:59.785 "aliases": [ 00:26:59.785 "00000000-0000-0000-0000-000000000001" 00:26:59.785 ], 00:26:59.785 "product_name": 
"passthru", 00:26:59.785 "block_size": 4128, 00:26:59.785 "num_blocks": 8192, 00:26:59.785 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:59.785 "md_size": 32, 00:26:59.785 "md_interleave": true, 00:26:59.785 "dif_type": 0, 00:26:59.785 "assigned_rate_limits": { 00:26:59.785 "rw_ios_per_sec": 0, 00:26:59.785 "rw_mbytes_per_sec": 0, 00:26:59.785 "r_mbytes_per_sec": 0, 00:26:59.785 "w_mbytes_per_sec": 0 00:26:59.785 }, 00:26:59.785 "claimed": true, 00:26:59.785 "claim_type": "exclusive_write", 00:26:59.785 "zoned": false, 00:26:59.785 "supported_io_types": { 00:26:59.785 "read": true, 00:26:59.785 "write": true, 00:26:59.785 "unmap": true, 00:26:59.785 "flush": true, 00:26:59.785 "reset": true, 00:26:59.785 "nvme_admin": false, 00:26:59.785 "nvme_io": false, 00:26:59.785 "nvme_io_md": false, 00:26:59.785 "write_zeroes": true, 00:26:59.785 "zcopy": true, 00:26:59.785 "get_zone_info": false, 00:26:59.785 "zone_management": false, 00:26:59.785 "zone_append": false, 00:26:59.785 "compare": false, 00:26:59.785 "compare_and_write": false, 00:26:59.785 "abort": true, 00:26:59.785 "seek_hole": false, 00:26:59.785 "seek_data": false, 00:26:59.785 "copy": true, 00:26:59.785 "nvme_iov_md": false 00:26:59.785 }, 00:26:59.785 "memory_domains": [ 00:26:59.785 { 00:26:59.785 "dma_device_id": "system", 00:26:59.785 "dma_device_type": 1 00:26:59.785 }, 00:26:59.785 { 00:26:59.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:59.785 "dma_device_type": 2 00:26:59.785 } 00:26:59.785 ], 00:26:59.785 "driver_specific": { 00:26:59.785 "passthru": { 00:26:59.785 "name": "pt1", 00:26:59.785 "base_bdev_name": "malloc1" 00:26:59.785 } 00:26:59.785 } 00:26:59.785 }' 00:26:59.785 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:00.044 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:00.044 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # 
[[ 4128 == 4128 ]] 00:27:00.044 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:00.044 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:00.044 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:00.044 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:00.044 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:00.044 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:27:00.044 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:00.044 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:00.304 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:00.304 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:00.304 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:00.304 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:00.304 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:00.304 "name": "pt2", 00:27:00.304 "aliases": [ 00:27:00.304 "00000000-0000-0000-0000-000000000002" 00:27:00.304 ], 00:27:00.304 "product_name": "passthru", 00:27:00.304 "block_size": 4128, 00:27:00.304 "num_blocks": 8192, 00:27:00.304 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:00.304 "md_size": 32, 00:27:00.304 "md_interleave": true, 00:27:00.304 "dif_type": 0, 00:27:00.304 
"assigned_rate_limits": { 00:27:00.304 "rw_ios_per_sec": 0, 00:27:00.304 "rw_mbytes_per_sec": 0, 00:27:00.304 "r_mbytes_per_sec": 0, 00:27:00.304 "w_mbytes_per_sec": 0 00:27:00.304 }, 00:27:00.304 "claimed": true, 00:27:00.304 "claim_type": "exclusive_write", 00:27:00.304 "zoned": false, 00:27:00.304 "supported_io_types": { 00:27:00.304 "read": true, 00:27:00.304 "write": true, 00:27:00.304 "unmap": true, 00:27:00.304 "flush": true, 00:27:00.304 "reset": true, 00:27:00.304 "nvme_admin": false, 00:27:00.304 "nvme_io": false, 00:27:00.304 "nvme_io_md": false, 00:27:00.304 "write_zeroes": true, 00:27:00.304 "zcopy": true, 00:27:00.304 "get_zone_info": false, 00:27:00.304 "zone_management": false, 00:27:00.304 "zone_append": false, 00:27:00.304 "compare": false, 00:27:00.304 "compare_and_write": false, 00:27:00.304 "abort": true, 00:27:00.304 "seek_hole": false, 00:27:00.304 "seek_data": false, 00:27:00.304 "copy": true, 00:27:00.304 "nvme_iov_md": false 00:27:00.304 }, 00:27:00.304 "memory_domains": [ 00:27:00.304 { 00:27:00.304 "dma_device_id": "system", 00:27:00.304 "dma_device_type": 1 00:27:00.304 }, 00:27:00.304 { 00:27:00.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:00.304 "dma_device_type": 2 00:27:00.304 } 00:27:00.304 ], 00:27:00.304 "driver_specific": { 00:27:00.304 "passthru": { 00:27:00.304 "name": "pt2", 00:27:00.304 "base_bdev_name": "malloc2" 00:27:00.304 } 00:27:00.304 } 00:27:00.304 }' 00:27:00.304 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:00.304 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:00.564 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:27:00.564 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:00.564 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:27:00.564 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:00.564 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:00.564 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:00.564 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:27:00.564 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:00.564 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:00.823 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:00.823 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:00.823 22:10:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:27:00.823 [2024-07-13 22:10:20.112333] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:00.823 22:10:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=405e3dab-4336-41dc-a2b2-445ff2b86de1 00:27:00.823 22:10:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z 405e3dab-4336-41dc-a2b2-445ff2b86de1 ']' 00:27:00.823 22:10:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:01.083 [2024-07-13 22:10:20.284519] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:01.083 [2024-07-13 22:10:20.284553] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from 
online to offline 00:27:01.083 [2024-07-13 22:10:20.284643] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:01.083 [2024-07-13 22:10:20.284715] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:01.083 [2024-07-13 22:10:20.284732] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040880 name raid_bdev1, state offline 00:27:01.083 22:10:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:01.083 22:10:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:27:01.083 22:10:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:27:01.083 22:10:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:27:01.083 22:10:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:01.083 22:10:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:01.343 22:10:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:01.343 22:10:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:01.603 22:10:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:27:01.603 22:10:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:27:01.603 22:10:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:27:01.603 22:10:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:01.603 22:10:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:27:01.603 22:10:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:01.603 22:10:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:01.603 22:10:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:01.603 22:10:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:01.603 22:10:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:01.603 22:10:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:01.603 22:10:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:01.603 22:10:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:01.603 22:10:20 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:27:01.603 22:10:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:01.863 [2024-07-13 22:10:21.134749] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:01.863 [2024-07-13 22:10:21.136487] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:01.863 [2024-07-13 22:10:21.136554] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:01.863 [2024-07-13 22:10:21.136603] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:01.863 [2024-07-13 22:10:21.136620] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:01.863 [2024-07-13 22:10:21.136633] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state configuring 00:27:01.863 request: 00:27:01.863 { 00:27:01.863 "name": "raid_bdev1", 00:27:01.863 "raid_level": "raid1", 00:27:01.863 "base_bdevs": [ 00:27:01.863 "malloc1", 00:27:01.863 "malloc2" 00:27:01.863 ], 00:27:01.863 "superblock": false, 00:27:01.863 "method": "bdev_raid_create", 00:27:01.863 "req_id": 1 00:27:01.863 } 00:27:01.863 Got JSON-RPC error response 00:27:01.863 response: 00:27:01.863 { 00:27:01.863 "code": -17, 00:27:01.863 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:01.863 } 00:27:01.863 22:10:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:27:01.863 22:10:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:01.863 22:10:21 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:27:01.863 22:10:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:27:01.863 22:10:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:01.863 22:10:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]'
00:27:02.122 22:10:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev=
00:27:02.122 22:10:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']'
00:27:02.122 22:10:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:27:02.122 [2024-07-13 22:10:21.467560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:27:02.122 [2024-07-13 22:10:21.467623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:27:02.122 [2024-07-13 22:10:21.467643] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480
00:27:02.122 [2024-07-13 22:10:21.467656] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:27:02.122 [2024-07-13 22:10:21.469403] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:27:02.122 [2024-07-13 22:10:21.469434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:27:02.122 [2024-07-13 22:10:21.469487] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:27:02.122 [2024-07-13 22:10:21.469534] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:27:02.122 pt1
00:27:02.122 22:10:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:27:02.122 22:10:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:27:02.122 22:10:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:27:02.122 22:10:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:27:02.122 22:10:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:27:02.122 22:10:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:27:02.122 22:10:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:27:02.122 22:10:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:27:02.122 22:10:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:27:02.122 22:10:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp
00:27:02.122 22:10:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:02.122 22:10:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:27:02.382 22:10:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:27:02.382   "name": "raid_bdev1",
00:27:02.382   "uuid": "405e3dab-4336-41dc-a2b2-445ff2b86de1",
00:27:02.382   "strip_size_kb": 0,
00:27:02.382   "state": "configuring",
00:27:02.382   "raid_level": "raid1",
00:27:02.382   "superblock": true,
00:27:02.382   "num_base_bdevs": 2,
00:27:02.382   "num_base_bdevs_discovered": 1,
00:27:02.382   "num_base_bdevs_operational": 2,
00:27:02.382   "base_bdevs_list": [
00:27:02.382     {
00:27:02.382       "name": "pt1",
00:27:02.382       "uuid": "00000000-0000-0000-0000-000000000001",
00:27:02.382       "is_configured": true,
00:27:02.382       "data_offset": 256,
00:27:02.382       "data_size": 7936
00:27:02.382     },
00:27:02.382     {
00:27:02.382       "name": null,
00:27:02.382       "uuid": "00000000-0000-0000-0000-000000000002",
00:27:02.382       "is_configured": false,
00:27:02.382       "data_offset": 256,
00:27:02.382       "data_size": 7936
00:27:02.382     }
00:27:02.382   ]
00:27:02.382 }'
00:27:02.382 22:10:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:27:02.382 22:10:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:27:02.951 22:10:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']'
00:27:02.951 22:10:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 ))
00:27:02.951 22:10:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs ))
00:27:02.951 22:10:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:27:02.951 [2024-07-13 22:10:22.301745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:27:02.951 [2024-07-13 22:10:22.301811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:27:02.951 [2024-07-13 22:10:22.301832] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041d80
00:27:02.951 [2024-07-13 22:10:22.301846] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:27:02.951 [2024-07-13 22:10:22.302060] vbdev_passthru.c:
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:02.951 [2024-07-13 22:10:22.302080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:02.951 [2024-07-13 22:10:22.302138] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:02.951 [2024-07-13 22:10:22.302169] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:02.951 [2024-07-13 22:10:22.302272] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:27:02.951 [2024-07-13 22:10:22.302286] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:27:02.951 [2024-07-13 22:10:22.302350] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:27:02.951 [2024-07-13 22:10:22.302436] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:27:02.951 [2024-07-13 22:10:22.302447] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:27:02.951 [2024-07-13 22:10:22.302514] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:02.951 pt2 00:27:02.951 22:10:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:27:02.951 22:10:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:02.951 22:10:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:02.951 22:10:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:02.951 22:10:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:02.951 22:10:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:02.951 22:10:22 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:02.951 22:10:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:02.951 22:10:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:02.951 22:10:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:02.951 22:10:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:02.951 22:10:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:02.951 22:10:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:02.951 22:10:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:03.210 22:10:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:03.210 "name": "raid_bdev1", 00:27:03.210 "uuid": "405e3dab-4336-41dc-a2b2-445ff2b86de1", 00:27:03.210 "strip_size_kb": 0, 00:27:03.210 "state": "online", 00:27:03.210 "raid_level": "raid1", 00:27:03.210 "superblock": true, 00:27:03.210 "num_base_bdevs": 2, 00:27:03.210 "num_base_bdevs_discovered": 2, 00:27:03.210 "num_base_bdevs_operational": 2, 00:27:03.210 "base_bdevs_list": [ 00:27:03.210 { 00:27:03.210 "name": "pt1", 00:27:03.210 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:03.210 "is_configured": true, 00:27:03.210 "data_offset": 256, 00:27:03.210 "data_size": 7936 00:27:03.210 }, 00:27:03.210 { 00:27:03.210 "name": "pt2", 00:27:03.210 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:03.210 "is_configured": true, 00:27:03.210 "data_offset": 256, 00:27:03.210 "data_size": 7936 00:27:03.210 
} 00:27:03.210 ] 00:27:03.210 }' 00:27:03.211 22:10:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:03.211 22:10:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:03.779 22:10:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:27:03.779 22:10:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:03.779 22:10:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:03.779 22:10:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:03.779 22:10:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:03.779 22:10:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:27:03.779 22:10:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:03.779 22:10:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:03.779 [2024-07-13 22:10:23.132207] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:03.779 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:03.779 "name": "raid_bdev1", 00:27:03.779 "aliases": [ 00:27:03.779 "405e3dab-4336-41dc-a2b2-445ff2b86de1" 00:27:03.779 ], 00:27:03.779 "product_name": "Raid Volume", 00:27:03.779 "block_size": 4128, 00:27:03.779 "num_blocks": 7936, 00:27:03.779 "uuid": "405e3dab-4336-41dc-a2b2-445ff2b86de1", 00:27:03.779 "md_size": 32, 00:27:03.779 "md_interleave": true, 00:27:03.779 "dif_type": 0, 00:27:03.779 "assigned_rate_limits": { 00:27:03.779 
"rw_ios_per_sec": 0, 00:27:03.779 "rw_mbytes_per_sec": 0, 00:27:03.779 "r_mbytes_per_sec": 0, 00:27:03.779 "w_mbytes_per_sec": 0 00:27:03.779 }, 00:27:03.779 "claimed": false, 00:27:03.779 "zoned": false, 00:27:03.779 "supported_io_types": { 00:27:03.779 "read": true, 00:27:03.779 "write": true, 00:27:03.779 "unmap": false, 00:27:03.779 "flush": false, 00:27:03.779 "reset": true, 00:27:03.779 "nvme_admin": false, 00:27:03.779 "nvme_io": false, 00:27:03.779 "nvme_io_md": false, 00:27:03.779 "write_zeroes": true, 00:27:03.779 "zcopy": false, 00:27:03.779 "get_zone_info": false, 00:27:03.779 "zone_management": false, 00:27:03.779 "zone_append": false, 00:27:03.779 "compare": false, 00:27:03.779 "compare_and_write": false, 00:27:03.779 "abort": false, 00:27:03.779 "seek_hole": false, 00:27:03.779 "seek_data": false, 00:27:03.779 "copy": false, 00:27:03.779 "nvme_iov_md": false 00:27:03.779 }, 00:27:03.779 "memory_domains": [ 00:27:03.779 { 00:27:03.779 "dma_device_id": "system", 00:27:03.779 "dma_device_type": 1 00:27:03.779 }, 00:27:03.779 { 00:27:03.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.779 "dma_device_type": 2 00:27:03.779 }, 00:27:03.779 { 00:27:03.779 "dma_device_id": "system", 00:27:03.779 "dma_device_type": 1 00:27:03.779 }, 00:27:03.779 { 00:27:03.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.779 "dma_device_type": 2 00:27:03.779 } 00:27:03.779 ], 00:27:03.779 "driver_specific": { 00:27:03.779 "raid": { 00:27:03.779 "uuid": "405e3dab-4336-41dc-a2b2-445ff2b86de1", 00:27:03.779 "strip_size_kb": 0, 00:27:03.779 "state": "online", 00:27:03.779 "raid_level": "raid1", 00:27:03.779 "superblock": true, 00:27:03.779 "num_base_bdevs": 2, 00:27:03.779 "num_base_bdevs_discovered": 2, 00:27:03.779 "num_base_bdevs_operational": 2, 00:27:03.779 "base_bdevs_list": [ 00:27:03.779 { 00:27:03.779 "name": "pt1", 00:27:03.779 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:03.779 "is_configured": true, 00:27:03.779 "data_offset": 256, 00:27:03.779 
"data_size": 7936 00:27:03.779 }, 00:27:03.779 { 00:27:03.779 "name": "pt2", 00:27:03.779 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:03.779 "is_configured": true, 00:27:03.779 "data_offset": 256, 00:27:03.779 "data_size": 7936 00:27:03.779 } 00:27:03.779 ] 00:27:03.779 } 00:27:03.779 } 00:27:03.779 }' 00:27:03.779 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:04.037 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:04.037 pt2' 00:27:04.038 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:04.038 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:04.038 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:04.038 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:04.038 "name": "pt1", 00:27:04.038 "aliases": [ 00:27:04.038 "00000000-0000-0000-0000-000000000001" 00:27:04.038 ], 00:27:04.038 "product_name": "passthru", 00:27:04.038 "block_size": 4128, 00:27:04.038 "num_blocks": 8192, 00:27:04.038 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:04.038 "md_size": 32, 00:27:04.038 "md_interleave": true, 00:27:04.038 "dif_type": 0, 00:27:04.038 "assigned_rate_limits": { 00:27:04.038 "rw_ios_per_sec": 0, 00:27:04.038 "rw_mbytes_per_sec": 0, 00:27:04.038 "r_mbytes_per_sec": 0, 00:27:04.038 "w_mbytes_per_sec": 0 00:27:04.038 }, 00:27:04.038 "claimed": true, 00:27:04.038 "claim_type": "exclusive_write", 00:27:04.038 "zoned": false, 00:27:04.038 "supported_io_types": { 00:27:04.038 "read": true, 00:27:04.038 "write": true, 00:27:04.038 "unmap": true, 
00:27:04.038 "flush": true, 00:27:04.038 "reset": true, 00:27:04.038 "nvme_admin": false, 00:27:04.038 "nvme_io": false, 00:27:04.038 "nvme_io_md": false, 00:27:04.038 "write_zeroes": true, 00:27:04.038 "zcopy": true, 00:27:04.038 "get_zone_info": false, 00:27:04.038 "zone_management": false, 00:27:04.038 "zone_append": false, 00:27:04.038 "compare": false, 00:27:04.038 "compare_and_write": false, 00:27:04.038 "abort": true, 00:27:04.038 "seek_hole": false, 00:27:04.038 "seek_data": false, 00:27:04.038 "copy": true, 00:27:04.038 "nvme_iov_md": false 00:27:04.038 }, 00:27:04.038 "memory_domains": [ 00:27:04.038 { 00:27:04.038 "dma_device_id": "system", 00:27:04.038 "dma_device_type": 1 00:27:04.038 }, 00:27:04.038 { 00:27:04.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:04.038 "dma_device_type": 2 00:27:04.038 } 00:27:04.038 ], 00:27:04.038 "driver_specific": { 00:27:04.038 "passthru": { 00:27:04.038 "name": "pt1", 00:27:04.038 "base_bdev_name": "malloc1" 00:27:04.038 } 00:27:04.038 } 00:27:04.038 }' 00:27:04.038 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:04.038 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:04.331 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:27:04.331 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:04.331 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:04.331 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:04.331 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:04.331 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:04.331 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:27:04.331 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:04.331 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:04.331 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:04.331 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:04.331 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:04.331 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:04.590 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:04.590 "name": "pt2", 00:27:04.590 "aliases": [ 00:27:04.590 "00000000-0000-0000-0000-000000000002" 00:27:04.590 ], 00:27:04.590 "product_name": "passthru", 00:27:04.590 "block_size": 4128, 00:27:04.590 "num_blocks": 8192, 00:27:04.590 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:04.590 "md_size": 32, 00:27:04.590 "md_interleave": true, 00:27:04.590 "dif_type": 0, 00:27:04.590 "assigned_rate_limits": { 00:27:04.590 "rw_ios_per_sec": 0, 00:27:04.590 "rw_mbytes_per_sec": 0, 00:27:04.590 "r_mbytes_per_sec": 0, 00:27:04.590 "w_mbytes_per_sec": 0 00:27:04.590 }, 00:27:04.590 "claimed": true, 00:27:04.590 "claim_type": "exclusive_write", 00:27:04.590 "zoned": false, 00:27:04.590 "supported_io_types": { 00:27:04.590 "read": true, 00:27:04.590 "write": true, 00:27:04.590 "unmap": true, 00:27:04.590 "flush": true, 00:27:04.590 "reset": true, 00:27:04.590 "nvme_admin": false, 00:27:04.590 "nvme_io": false, 00:27:04.590 "nvme_io_md": false, 00:27:04.590 "write_zeroes": true, 00:27:04.590 "zcopy": true, 00:27:04.590 "get_zone_info": false, 
00:27:04.590 "zone_management": false, 00:27:04.591 "zone_append": false, 00:27:04.591 "compare": false, 00:27:04.591 "compare_and_write": false, 00:27:04.591 "abort": true, 00:27:04.591 "seek_hole": false, 00:27:04.591 "seek_data": false, 00:27:04.591 "copy": true, 00:27:04.591 "nvme_iov_md": false 00:27:04.591 }, 00:27:04.591 "memory_domains": [ 00:27:04.591 { 00:27:04.591 "dma_device_id": "system", 00:27:04.591 "dma_device_type": 1 00:27:04.591 }, 00:27:04.591 { 00:27:04.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:04.591 "dma_device_type": 2 00:27:04.591 } 00:27:04.591 ], 00:27:04.591 "driver_specific": { 00:27:04.591 "passthru": { 00:27:04.591 "name": "pt2", 00:27:04.591 "base_bdev_name": "malloc2" 00:27:04.591 } 00:27:04.591 } 00:27:04.591 }' 00:27:04.591 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:04.591 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:04.591 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:27:04.591 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:04.591 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:04.849 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:04.850 22:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:04.850 22:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:04.850 22:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:27:04.850 22:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:04.850 22:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # 
jq .dif_type 00:27:04.850 22:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:04.850 22:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:27:04.850 22:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:05.109 [2024-07-13 22:10:24.303312] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:05.109 22:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' 405e3dab-4336-41dc-a2b2-445ff2b86de1 '!=' 405e3dab-4336-41dc-a2b2-445ff2b86de1 ']' 00:27:05.109 22:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:27:05.109 22:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:05.109 22:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:27:05.109 22:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:05.109 [2024-07-13 22:10:24.459467] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:27:05.109 22:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:05.109 22:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:05.109 22:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:05.109 22:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:05.109 22:10:24 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:27:05.109 22:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1
00:27:05.109 22:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:27:05.109 22:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:27:05.109 22:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:27:05.109 22:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp
00:27:05.109 22:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:05.109 22:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:27:05.368 22:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:27:05.368   "name": "raid_bdev1",
00:27:05.368   "uuid": "405e3dab-4336-41dc-a2b2-445ff2b86de1",
00:27:05.368   "strip_size_kb": 0,
00:27:05.368   "state": "online",
00:27:05.368   "raid_level": "raid1",
00:27:05.368   "superblock": true,
00:27:05.368   "num_base_bdevs": 2,
00:27:05.368   "num_base_bdevs_discovered": 1,
00:27:05.368   "num_base_bdevs_operational": 1,
00:27:05.368   "base_bdevs_list": [
00:27:05.368     {
00:27:05.368       "name": null,
00:27:05.368       "uuid": "00000000-0000-0000-0000-000000000000",
00:27:05.368       "is_configured": false,
00:27:05.368       "data_offset": 256,
00:27:05.368       "data_size": 7936
00:27:05.368     },
00:27:05.368     {
00:27:05.368       "name": "pt2",
00:27:05.368       "uuid": "00000000-0000-0000-0000-000000000002",
00:27:05.368       "is_configured": true,
00:27:05.368       "data_offset": 256,
00:27:05.368       "data_size": 7936
00:27:05.368     }
00:27:05.368   ]
00:27:05.368 }'
00:27:05.368 22:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:27:05.368 22:10:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:27:05.936 22:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:27:05.936 [2024-07-13 22:10:25.273548] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:27:05.936 [2024-07-13 22:10:25.273579] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:27:05.936 [2024-07-13 22:10:25.273651] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:27:05.936 [2024-07-13 22:10:25.273698] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:27:05.936 [2024-07-13 22:10:25.273711] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline
00:27:05.936 22:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:05.936 22:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]'
00:27:06.195 22:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev=
00:27:06.195 22:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']'
00:27:06.195 22:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 ))
00:27:06.195 22:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs ))
00:27:06.195 22:10:25 bdev_raid.raid_superblock_test_md_interleaved --
bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:06.455 22:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:27:06.455 22:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:27:06.455 22:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:27:06.455 22:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:27:06.455 22:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@518 -- # i=1 00:27:06.455 22:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:06.455 [2024-07-13 22:10:25.794910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:06.455 [2024-07-13 22:10:25.794992] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:06.455 [2024-07-13 22:10:25.795011] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042080 00:27:06.455 [2024-07-13 22:10:25.795025] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:06.455 [2024-07-13 22:10:25.796922] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:06.455 [2024-07-13 22:10:25.796953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:06.455 [2024-07-13 22:10:25.797009] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:06.455 [2024-07-13 22:10:25.797058] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:06.455 [2024-07-13 22:10:25.797146] 
bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042680 00:27:06.455 [2024-07-13 22:10:25.797159] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:27:06.455 [2024-07-13 22:10:25.797226] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:27:06.455 [2024-07-13 22:10:25.797317] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042680 00:27:06.455 [2024-07-13 22:10:25.797327] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042680 00:27:06.455 [2024-07-13 22:10:25.797400] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:06.455 pt2 00:27:06.455 22:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:06.455 22:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:06.455 22:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:06.455 22:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:06.455 22:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:06.455 22:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:06.455 22:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:06.455 22:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:06.455 22:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:06.455 22:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local 
tmp
00:27:06.455 22:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:06.455 22:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:27:06.713 22:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:27:06.713   "name": "raid_bdev1",
00:27:06.713   "uuid": "405e3dab-4336-41dc-a2b2-445ff2b86de1",
00:27:06.713   "strip_size_kb": 0,
00:27:06.713   "state": "online",
00:27:06.713   "raid_level": "raid1",
00:27:06.713   "superblock": true,
00:27:06.713   "num_base_bdevs": 2,
00:27:06.713   "num_base_bdevs_discovered": 1,
00:27:06.713   "num_base_bdevs_operational": 1,
00:27:06.713   "base_bdevs_list": [
00:27:06.713     {
00:27:06.713       "name": null,
00:27:06.713       "uuid": "00000000-0000-0000-0000-000000000000",
00:27:06.713       "is_configured": false,
00:27:06.713       "data_offset": 256,
00:27:06.713       "data_size": 7936
00:27:06.714     },
00:27:06.714     {
00:27:06.714       "name": "pt2",
00:27:06.714       "uuid": "00000000-0000-0000-0000-000000000002",
00:27:06.714       "is_configured": true,
00:27:06.714       "data_offset": 256,
00:27:06.714       "data_size": 7936
00:27:06.714     }
00:27:06.714   ]
00:27:06.714 }'
00:27:06.714 22:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:27:06.714 22:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:27:07.281 22:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:27:07.281 [2024-07-13 22:10:26.637136] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:27:07.281 [2024-07-13 22:10:26.637170] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:27:07.281 [2024-07-13 22:10:26.637245] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:27:07.281 [2024-07-13 22:10:26.637296] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:27:07.281 [2024-07-13 22:10:26.637307] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state offline
00:27:07.281 22:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:07.281 22:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]'
00:27:07.540 22:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # raid_bdev=
00:27:07.540 22:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']'
00:27:07.540 22:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']'
00:27:07.540 22:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:27:07.800 [2024-07-13 22:10:26.957969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:27:07.800 [2024-07-13 22:10:26.958030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:27:07.800 [2024-07-13 22:10:26.958051] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980
00:27:07.800 [2024-07-13 22:10:26.958063] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:27:07.800 [2024-07-13 22:10:26.959942] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:27:07.800
[2024-07-13 22:10:26.959975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:07.800 [2024-07-13 22:10:26.960036] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:07.800 [2024-07-13 22:10:26.960114] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:07.800 [2024-07-13 22:10:26.960246] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:27:07.800 [2024-07-13 22:10:26.960259] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:07.800 [2024-07-13 22:10:26.960279] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042f80 name raid_bdev1, state configuring 00:27:07.800 [2024-07-13 22:10:26.960344] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:07.800 [2024-07-13 22:10:26.960417] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:27:07.800 [2024-07-13 22:10:26.960427] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:27:07.800 [2024-07-13 22:10:26.960494] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:27:07.800 [2024-07-13 22:10:26.960583] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:27:07.800 [2024-07-13 22:10:26.960594] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:27:07.800 [2024-07-13 22:10:26.960663] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:07.800 pt1 00:27:07.800 22:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:27:07.800 22:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:07.800 
22:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:07.800 22:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:07.800 22:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:07.800 22:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:07.800 22:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:07.800 22:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:07.800 22:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:07.800 22:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:07.800 22:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:07.800 22:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:07.800 22:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:07.800 22:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:07.800 "name": "raid_bdev1", 00:27:07.800 "uuid": "405e3dab-4336-41dc-a2b2-445ff2b86de1", 00:27:07.800 "strip_size_kb": 0, 00:27:07.800 "state": "online", 00:27:07.800 "raid_level": "raid1", 00:27:07.800 "superblock": true, 00:27:07.800 "num_base_bdevs": 2, 00:27:07.800 "num_base_bdevs_discovered": 1, 00:27:07.800 "num_base_bdevs_operational": 1, 00:27:07.800 "base_bdevs_list": [ 00:27:07.800 { 00:27:07.800 "name": null, 
00:27:07.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:07.800 "is_configured": false, 00:27:07.800 "data_offset": 256, 00:27:07.800 "data_size": 7936 00:27:07.800 }, 00:27:07.800 { 00:27:07.800 "name": "pt2", 00:27:07.800 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:07.800 "is_configured": true, 00:27:07.800 "data_offset": 256, 00:27:07.800 "data_size": 7936 00:27:07.800 } 00:27:07.800 ] 00:27:07.800 }' 00:27:07.800 22:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:07.800 22:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:08.367 22:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:27:08.367 22:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:08.625 22:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:27:08.625 22:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:08.625 22:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:27:08.625 [2024-07-13 22:10:27.944793] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:08.625 22:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 405e3dab-4336-41dc-a2b2-445ff2b86de1 '!=' 405e3dab-4336-41dc-a2b2-445ff2b86de1 ']' 00:27:08.625 22:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 1517839 00:27:08.626 22:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@948 -- # 
'[' -z 1517839 ']' 00:27:08.626 22:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 1517839 00:27:08.626 22:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:27:08.626 22:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:08.626 22:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1517839 00:27:08.626 22:10:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:08.626 22:10:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:08.626 22:10:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1517839' 00:27:08.626 killing process with pid 1517839 00:27:08.626 22:10:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@967 -- # kill 1517839 00:27:08.626 [2024-07-13 22:10:28.014993] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:08.626 [2024-07-13 22:10:28.015087] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:08.626 [2024-07-13 22:10:28.015136] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:08.626 [2024-07-13 22:10:28.015151] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:27:08.626 22:10:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # wait 1517839 00:27:08.884 [2024-07-13 22:10:28.150700] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:10.261 22:10:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:27:10.261 00:27:10.261 real 0m12.963s 
00:27:10.261 user 0m22.293s 00:27:10.261 sys 0m2.368s 00:27:10.261 22:10:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:10.261 22:10:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:10.261 ************************************ 00:27:10.261 END TEST raid_superblock_test_md_interleaved 00:27:10.261 ************************************ 00:27:10.261 22:10:29 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:10.261 22:10:29 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:27:10.261 22:10:29 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:27:10.261 22:10:29 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:10.261 22:10:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:10.261 ************************************ 00:27:10.261 START TEST raid_rebuild_test_sb_md_interleaved 00:27:10.261 ************************************ 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false false 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:27:10.261 22:10:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 
00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=1520297 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 1520297 /var/tmp/spdk-raid.sock 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 1520297 ']' 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:10.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:10.261 22:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:10.261 [2024-07-13 22:10:29.552612] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:27:10.261 [2024-07-13 22:10:29.552713] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1520297 ] 00:27:10.261 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:10.261 Zero copy mechanism will not be used. 00:27:10.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.261 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:10.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.261 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:10.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.261 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:10.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.261 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:10.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.261 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:10.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.261 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:10.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.261 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:10.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.261 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:10.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.261 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:10.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.261 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:10.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.261 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:10.261 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.262 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:10.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.262 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:10.521 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.521 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:10.521 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.521 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:10.521 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.521 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:10.521 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.521 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:10.521 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.521 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:10.521 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.521 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:10.521 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.521 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:10.521 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.521 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:10.521 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.521 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:10.521 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.521 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:10.521 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.521 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:10.521 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.521 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:10.521 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.521 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:10.521 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.521 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:10.521 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.521 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:10.521 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.521 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:10.521 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.521 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:10.521 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.521 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:10.521 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.521 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:10.521 [2024-07-13 22:10:29.713452] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.779 [2024-07-13 22:10:29.919552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.779 [2024-07-13 22:10:30.162608] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:10.779 [2024-07-13 22:10:30.162638] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:11.037 22:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:11.037 22:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:27:11.037 22:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:11.037 22:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 
-i -b BaseBdev1_malloc 00:27:11.296 BaseBdev1_malloc 00:27:11.296 22:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:11.296 [2024-07-13 22:10:30.656039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:11.296 [2024-07-13 22:10:30.656090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:11.296 [2024-07-13 22:10:30.656113] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:27:11.296 [2024-07-13 22:10:30.656126] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:11.296 [2024-07-13 22:10:30.657880] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:11.296 [2024-07-13 22:10:30.657922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:11.296 BaseBdev1 00:27:11.296 22:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:11.296 22:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:27:11.554 BaseBdev2_malloc 00:27:11.554 22:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:11.813 [2024-07-13 22:10:31.013689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:11.813 [2024-07-13 22:10:31.013747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:11.813 [2024-07-13 22:10:31.013769] 
vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:27:11.813 [2024-07-13 22:10:31.013788] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:11.813 [2024-07-13 22:10:31.015612] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:11.813 [2024-07-13 22:10:31.015640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:11.813 BaseBdev2 00:27:11.813 22:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:27:12.071 spare_malloc 00:27:12.071 22:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:12.071 spare_delay 00:27:12.071 22:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:12.330 [2024-07-13 22:10:31.530307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:12.330 [2024-07-13 22:10:31.530361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:12.330 [2024-07-13 22:10:31.530385] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:27:12.330 [2024-07-13 22:10:31.530398] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:12.330 [2024-07-13 22:10:31.532217] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:12.330 [2024-07-13 22:10:31.532247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 
00:27:12.330 spare 00:27:12.330 22:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:27:12.330 [2024-07-13 22:10:31.694768] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:12.330 [2024-07-13 22:10:31.696475] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:12.330 [2024-07-13 22:10:31.696660] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:27:12.330 [2024-07-13 22:10:31.696677] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:27:12.330 [2024-07-13 22:10:31.696764] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:27:12.330 [2024-07-13 22:10:31.696870] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:27:12.330 [2024-07-13 22:10:31.696879] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:27:12.330 [2024-07-13 22:10:31.696966] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:12.330 22:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:12.330 22:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:12.330 22:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:12.330 22:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:12.330 22:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:12.330 22:10:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:12.330 22:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:12.330 22:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:12.330 22:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:12.330 22:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:12.330 22:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:12.330 22:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:12.589 22:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:12.589 "name": "raid_bdev1", 00:27:12.589 "uuid": "d67425bf-ed12-45bf-8721-baf8bae81ebd", 00:27:12.589 "strip_size_kb": 0, 00:27:12.589 "state": "online", 00:27:12.589 "raid_level": "raid1", 00:27:12.589 "superblock": true, 00:27:12.589 "num_base_bdevs": 2, 00:27:12.589 "num_base_bdevs_discovered": 2, 00:27:12.589 "num_base_bdevs_operational": 2, 00:27:12.589 "base_bdevs_list": [ 00:27:12.589 { 00:27:12.589 "name": "BaseBdev1", 00:27:12.589 "uuid": "aaf0b053-a781-5cc6-b3ca-d6353b02c488", 00:27:12.589 "is_configured": true, 00:27:12.589 "data_offset": 256, 00:27:12.589 "data_size": 7936 00:27:12.589 }, 00:27:12.589 { 00:27:12.589 "name": "BaseBdev2", 00:27:12.589 "uuid": "54dba6e2-180d-56b8-b84e-1034c159c6c3", 00:27:12.589 "is_configured": true, 00:27:12.589 "data_offset": 256, 00:27:12.589 "data_size": 7936 00:27:12.589 } 00:27:12.589 ] 00:27:12.589 }' 00:27:12.589 22:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:12.589 22:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:13.156 22:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:13.156 22:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:27:13.156 [2024-07-13 22:10:32.505140] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:13.156 22:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:27:13.156 22:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.156 22:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:13.414 22:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:27:13.414 22:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:27:13.414 22:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:27:13.414 22:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:13.672 [2024-07-13 22:10:32.845763] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:13.672 22:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:13.672 22:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:13.672 22:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:13.672 22:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:13.672 22:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:13.672 22:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:13.673 22:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:13.673 22:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:13.673 22:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:13.673 22:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:13.673 22:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.673 22:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:13.673 22:10:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:13.673 "name": "raid_bdev1", 00:27:13.673 "uuid": "d67425bf-ed12-45bf-8721-baf8bae81ebd", 00:27:13.673 "strip_size_kb": 0, 00:27:13.673 "state": "online", 00:27:13.673 "raid_level": "raid1", 00:27:13.673 "superblock": true, 00:27:13.673 "num_base_bdevs": 2, 00:27:13.673 "num_base_bdevs_discovered": 1, 00:27:13.673 "num_base_bdevs_operational": 1, 00:27:13.673 "base_bdevs_list": [ 00:27:13.673 { 00:27:13.673 "name": null, 00:27:13.673 "uuid": "00000000-0000-0000-0000-000000000000", 
00:27:13.673 "is_configured": false, 00:27:13.673 "data_offset": 256, 00:27:13.673 "data_size": 7936 00:27:13.673 }, 00:27:13.673 { 00:27:13.673 "name": "BaseBdev2", 00:27:13.673 "uuid": "54dba6e2-180d-56b8-b84e-1034c159c6c3", 00:27:13.673 "is_configured": true, 00:27:13.673 "data_offset": 256, 00:27:13.673 "data_size": 7936 00:27:13.673 } 00:27:13.673 ] 00:27:13.673 }' 00:27:13.673 22:10:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:13.673 22:10:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:14.269 22:10:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:14.527 [2024-07-13 22:10:33.659921] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:14.527 [2024-07-13 22:10:33.679311] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:27:14.527 [2024-07-13 22:10:33.681065] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:14.527 22:10:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # sleep 1 00:27:15.460 22:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:15.460 22:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:15.460 22:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:15.460 22:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:15.460 22:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:15.460 22:10:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:15.460 22:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:15.718 22:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:15.718 "name": "raid_bdev1", 00:27:15.718 "uuid": "d67425bf-ed12-45bf-8721-baf8bae81ebd", 00:27:15.718 "strip_size_kb": 0, 00:27:15.718 "state": "online", 00:27:15.718 "raid_level": "raid1", 00:27:15.719 "superblock": true, 00:27:15.719 "num_base_bdevs": 2, 00:27:15.719 "num_base_bdevs_discovered": 2, 00:27:15.719 "num_base_bdevs_operational": 2, 00:27:15.719 "process": { 00:27:15.719 "type": "rebuild", 00:27:15.719 "target": "spare", 00:27:15.719 "progress": { 00:27:15.719 "blocks": 2816, 00:27:15.719 "percent": 35 00:27:15.719 } 00:27:15.719 }, 00:27:15.719 "base_bdevs_list": [ 00:27:15.719 { 00:27:15.719 "name": "spare", 00:27:15.719 "uuid": "2f15d22d-3b3b-5402-af45-0c455c7f957f", 00:27:15.719 "is_configured": true, 00:27:15.719 "data_offset": 256, 00:27:15.719 "data_size": 7936 00:27:15.719 }, 00:27:15.719 { 00:27:15.719 "name": "BaseBdev2", 00:27:15.719 "uuid": "54dba6e2-180d-56b8-b84e-1034c159c6c3", 00:27:15.719 "is_configured": true, 00:27:15.719 "data_offset": 256, 00:27:15.719 "data_size": 7936 00:27:15.719 } 00:27:15.719 ] 00:27:15.719 }' 00:27:15.719 22:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:15.719 22:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:15.719 22:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:15.719 22:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:15.719 22:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:15.719 [2024-07-13 22:10:35.081942] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:15.719 [2024-07-13 22:10:35.091809] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:15.719 [2024-07-13 22:10:35.091857] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:15.719 [2024-07-13 22:10:35.091872] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:15.719 [2024-07-13 22:10:35.091883] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:15.976 22:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:15.976 22:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:15.976 22:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:15.976 22:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:15.976 22:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:15.976 22:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:15.976 22:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:15.976 22:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:15.976 22:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:15.976 22:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:15.976 22:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:15.976 22:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:15.976 22:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:15.977 "name": "raid_bdev1", 00:27:15.977 "uuid": "d67425bf-ed12-45bf-8721-baf8bae81ebd", 00:27:15.977 "strip_size_kb": 0, 00:27:15.977 "state": "online", 00:27:15.977 "raid_level": "raid1", 00:27:15.977 "superblock": true, 00:27:15.977 "num_base_bdevs": 2, 00:27:15.977 "num_base_bdevs_discovered": 1, 00:27:15.977 "num_base_bdevs_operational": 1, 00:27:15.977 "base_bdevs_list": [ 00:27:15.977 { 00:27:15.977 "name": null, 00:27:15.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:15.977 "is_configured": false, 00:27:15.977 "data_offset": 256, 00:27:15.977 "data_size": 7936 00:27:15.977 }, 00:27:15.977 { 00:27:15.977 "name": "BaseBdev2", 00:27:15.977 "uuid": "54dba6e2-180d-56b8-b84e-1034c159c6c3", 00:27:15.977 "is_configured": true, 00:27:15.977 "data_offset": 256, 00:27:15.977 "data_size": 7936 00:27:15.977 } 00:27:15.977 ] 00:27:15.977 }' 00:27:15.977 22:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:15.977 22:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:16.542 22:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:16.542 22:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:27:16.542 22:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:16.542 22:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:16.542 22:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:16.542 22:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:16.542 22:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:16.799 22:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:16.799 "name": "raid_bdev1", 00:27:16.799 "uuid": "d67425bf-ed12-45bf-8721-baf8bae81ebd", 00:27:16.799 "strip_size_kb": 0, 00:27:16.799 "state": "online", 00:27:16.799 "raid_level": "raid1", 00:27:16.800 "superblock": true, 00:27:16.800 "num_base_bdevs": 2, 00:27:16.800 "num_base_bdevs_discovered": 1, 00:27:16.800 "num_base_bdevs_operational": 1, 00:27:16.800 "base_bdevs_list": [ 00:27:16.800 { 00:27:16.800 "name": null, 00:27:16.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:16.800 "is_configured": false, 00:27:16.800 "data_offset": 256, 00:27:16.800 "data_size": 7936 00:27:16.800 }, 00:27:16.800 { 00:27:16.800 "name": "BaseBdev2", 00:27:16.800 "uuid": "54dba6e2-180d-56b8-b84e-1034c159c6c3", 00:27:16.800 "is_configured": true, 00:27:16.800 "data_offset": 256, 00:27:16.800 "data_size": 7936 00:27:16.800 } 00:27:16.800 ] 00:27:16.800 }' 00:27:16.800 22:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:16.800 22:10:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:16.800 22:10:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:16.800 22:10:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:16.800 22:10:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:17.057 [2024-07-13 22:10:36.204749] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:17.057 [2024-07-13 22:10:36.222593] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:27:17.057 [2024-07-13 22:10:36.224323] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:17.057 22:10:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:17.990 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:17.990 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:17.990 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:17.990 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:17.990 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:17.990 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:17.990 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:18.248 22:10:37 
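The pattern repeated throughout this transcript — fetch all bdevs over RPC, filter out `raid_bdev1` with jq, then read the process fields with a `// "none"` fallback — can be reproduced standalone. A minimal sketch; the JSON literal below is a trimmed stand-in for the `bdev_raid_get_bdevs` output, not a live RPC call:

```shell
#!/usr/bin/env bash
# Stand-in for `rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all`;
# in the real test this JSON comes back over the RPC socket.
bdevs='[{"name":"raid_bdev1","process":{"type":"rebuild","target":"spare"}},{"name":"other"}]'

# Select the entry of interest, as bdev_raid.sh@187 does.
raid_bdev_info=$(jq -r '.[] | select(.name == "raid_bdev1")' <<< "$bdevs")

# jq's `//` alternative operator substitutes "none" when .process (or the
# nested field) is null/missing, so the comparisons never see an empty string.
process_type=$(jq -r '.process.type // "none"' <<< "$raid_bdev_info")
process_target=$(jq -r '.process.target // "none"' <<< "$raid_bdev_info")

[[ $process_type == rebuild ]] && [[ $process_target == spare ]] && echo "rebuild on spare"
```

The xtrace lines like `[[ rebuild == \r\e\b\u\i\l\d ]]` are bash printing the right-hand side with every character escaped, which is how `set -x` shows that the pattern is matched literally rather than as a glob.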
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:18.248 "name": "raid_bdev1", 00:27:18.248 "uuid": "d67425bf-ed12-45bf-8721-baf8bae81ebd", 00:27:18.248 "strip_size_kb": 0, 00:27:18.248 "state": "online", 00:27:18.248 "raid_level": "raid1", 00:27:18.248 "superblock": true, 00:27:18.248 "num_base_bdevs": 2, 00:27:18.248 "num_base_bdevs_discovered": 2, 00:27:18.248 "num_base_bdevs_operational": 2, 00:27:18.248 "process": { 00:27:18.248 "type": "rebuild", 00:27:18.248 "target": "spare", 00:27:18.248 "progress": { 00:27:18.248 "blocks": 2816, 00:27:18.248 "percent": 35 00:27:18.248 } 00:27:18.248 }, 00:27:18.248 "base_bdevs_list": [ 00:27:18.248 { 00:27:18.248 "name": "spare", 00:27:18.248 "uuid": "2f15d22d-3b3b-5402-af45-0c455c7f957f", 00:27:18.248 "is_configured": true, 00:27:18.248 "data_offset": 256, 00:27:18.248 "data_size": 7936 00:27:18.248 }, 00:27:18.248 { 00:27:18.248 "name": "BaseBdev2", 00:27:18.248 "uuid": "54dba6e2-180d-56b8-b84e-1034c159c6c3", 00:27:18.248 "is_configured": true, 00:27:18.248 "data_offset": 256, 00:27:18.248 "data_size": 7936 00:27:18.248 } 00:27:18.248 ] 00:27:18.248 }' 00:27:18.248 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:18.248 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:18.248 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:18.248 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:18.248 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:27:18.248 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:27:18.248 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: 
line 665: [: =: unary operator expected 00:27:18.248 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:27:18.248 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:27:18.248 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:27:18.248 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=968 00:27:18.248 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:18.248 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:18.248 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:18.248 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:18.248 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:18.248 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:18.248 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:18.248 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:18.506 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:18.506 "name": "raid_bdev1", 00:27:18.506 "uuid": "d67425bf-ed12-45bf-8721-baf8bae81ebd", 00:27:18.506 "strip_size_kb": 0, 00:27:18.506 "state": "online", 00:27:18.506 "raid_level": "raid1", 00:27:18.506 "superblock": true, 00:27:18.506 
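The `[: =: unary operator expected` error captured above is the classic unquoted-empty-variable failure in `[`: the xtrace line shows the test expanded to `'[' = false ']'`, i.e. the left operand vanished entirely, so `[` parsed `=` as a unary operator. A minimal sketch reproducing and avoiding it (`flag` is a hypothetical variable name, not the one used in bdev_raid.sh):

```shell
#!/usr/bin/env bash
# With flag unset/empty, an unquoted `[ $flag = false ]` expands to
# `[ = false ]` and fails exactly as seen in the log above.
flag=""
if [ "$flag" = false ]; then   # quoting keeps an (empty) operand in place
    echo "flag is false"
else
    echo "flag is empty or not false"
fi

# Alternative: bash's [[ ]] does not word-split, so even unquoted
# operands stay in place and no syntax error occurs.
[[ $flag == false ]] || echo "not false (and no error despite no quotes)"
```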
"num_base_bdevs": 2, 00:27:18.506 "num_base_bdevs_discovered": 2, 00:27:18.506 "num_base_bdevs_operational": 2, 00:27:18.506 "process": { 00:27:18.506 "type": "rebuild", 00:27:18.506 "target": "spare", 00:27:18.506 "progress": { 00:27:18.506 "blocks": 3584, 00:27:18.506 "percent": 45 00:27:18.506 } 00:27:18.506 }, 00:27:18.506 "base_bdevs_list": [ 00:27:18.506 { 00:27:18.506 "name": "spare", 00:27:18.506 "uuid": "2f15d22d-3b3b-5402-af45-0c455c7f957f", 00:27:18.506 "is_configured": true, 00:27:18.506 "data_offset": 256, 00:27:18.506 "data_size": 7936 00:27:18.506 }, 00:27:18.506 { 00:27:18.506 "name": "BaseBdev2", 00:27:18.506 "uuid": "54dba6e2-180d-56b8-b84e-1034c159c6c3", 00:27:18.506 "is_configured": true, 00:27:18.506 "data_offset": 256, 00:27:18.506 "data_size": 7936 00:27:18.506 } 00:27:18.506 ] 00:27:18.506 }' 00:27:18.506 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:18.506 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:18.506 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:18.506 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:18.506 22:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:19.491 22:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:19.491 22:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:19.491 22:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:19.491 22:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:19.491 22:10:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:19.491 22:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:19.491 22:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:19.491 22:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:19.748 22:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:19.748 "name": "raid_bdev1", 00:27:19.748 "uuid": "d67425bf-ed12-45bf-8721-baf8bae81ebd", 00:27:19.748 "strip_size_kb": 0, 00:27:19.748 "state": "online", 00:27:19.748 "raid_level": "raid1", 00:27:19.748 "superblock": true, 00:27:19.748 "num_base_bdevs": 2, 00:27:19.748 "num_base_bdevs_discovered": 2, 00:27:19.748 "num_base_bdevs_operational": 2, 00:27:19.748 "process": { 00:27:19.748 "type": "rebuild", 00:27:19.748 "target": "spare", 00:27:19.748 "progress": { 00:27:19.748 "blocks": 6656, 00:27:19.748 "percent": 83 00:27:19.748 } 00:27:19.748 }, 00:27:19.748 "base_bdevs_list": [ 00:27:19.748 { 00:27:19.748 "name": "spare", 00:27:19.748 "uuid": "2f15d22d-3b3b-5402-af45-0c455c7f957f", 00:27:19.748 "is_configured": true, 00:27:19.748 "data_offset": 256, 00:27:19.748 "data_size": 7936 00:27:19.748 }, 00:27:19.748 { 00:27:19.748 "name": "BaseBdev2", 00:27:19.748 "uuid": "54dba6e2-180d-56b8-b84e-1034c159c6c3", 00:27:19.748 "is_configured": true, 00:27:19.748 "data_offset": 256, 00:27:19.748 "data_size": 7936 00:27:19.748 } 00:27:19.748 ] 00:27:19.748 }' 00:27:19.748 22:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:19.748 22:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:27:19.748 22:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:19.748 22:10:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:19.748 22:10:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:20.005 [2024-07-13 22:10:39.347823] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:20.005 [2024-07-13 22:10:39.347883] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:20.005 [2024-07-13 22:10:39.347972] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:20.939 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:20.939 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:20.939 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:20.939 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:20.939 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:20.939 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:20.939 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:20.939 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:20.939 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
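The progress checks above sit inside a plain bash deadline loop (`bdev_raid.sh@705-710`): `local timeout=968`, then `(( SECONDS < timeout ))` with a `sleep 1` between iterations. `SECONDS` is bash's built-in elapsed-time counter, so comparing it against a precomputed deadline gives a wall-clock timeout without shelling out to `date`. A minimal sketch of that shape, with a much shorter deadline and a hypothetical condition command:

```shell
#!/usr/bin/env bash
# Poll a condition command until it succeeds or a deadline passes,
# mirroring the SECONDS-based loop in the transcript above.
poll_until() {
    local timeout=$((SECONDS + 5))   # 5s for illustration; the test uses 968
    while (( SECONDS < timeout )); do
        if "$@"; then                # condition, e.g. an RPC + jq check
            return 0
        fi
        sleep 1
    done
    return 1                         # deadline passed without success
}

# Usage with a trivially-true condition; the real test re-runs
# verify_raid_bdev_process each iteration instead.
poll_until true && echo "condition met"   # prints "condition met"
```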
00:27:20.939 "name": "raid_bdev1", 00:27:20.939 "uuid": "d67425bf-ed12-45bf-8721-baf8bae81ebd", 00:27:20.939 "strip_size_kb": 0, 00:27:20.939 "state": "online", 00:27:20.939 "raid_level": "raid1", 00:27:20.939 "superblock": true, 00:27:20.939 "num_base_bdevs": 2, 00:27:20.939 "num_base_bdevs_discovered": 2, 00:27:20.939 "num_base_bdevs_operational": 2, 00:27:20.939 "base_bdevs_list": [ 00:27:20.939 { 00:27:20.939 "name": "spare", 00:27:20.939 "uuid": "2f15d22d-3b3b-5402-af45-0c455c7f957f", 00:27:20.939 "is_configured": true, 00:27:20.939 "data_offset": 256, 00:27:20.939 "data_size": 7936 00:27:20.939 }, 00:27:20.939 { 00:27:20.939 "name": "BaseBdev2", 00:27:20.939 "uuid": "54dba6e2-180d-56b8-b84e-1034c159c6c3", 00:27:20.939 "is_configured": true, 00:27:20.939 "data_offset": 256, 00:27:20.939 "data_size": 7936 00:27:20.939 } 00:27:20.939 ] 00:27:20.939 }' 00:27:20.939 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:20.939 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:20.939 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:20.939 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:20.939 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:27:20.939 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:20.939 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:20.939 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:20.939 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 
00:27:20.939 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:20.939 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:20.939 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:21.197 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:21.197 "name": "raid_bdev1", 00:27:21.197 "uuid": "d67425bf-ed12-45bf-8721-baf8bae81ebd", 00:27:21.197 "strip_size_kb": 0, 00:27:21.197 "state": "online", 00:27:21.197 "raid_level": "raid1", 00:27:21.197 "superblock": true, 00:27:21.197 "num_base_bdevs": 2, 00:27:21.197 "num_base_bdevs_discovered": 2, 00:27:21.197 "num_base_bdevs_operational": 2, 00:27:21.197 "base_bdevs_list": [ 00:27:21.197 { 00:27:21.197 "name": "spare", 00:27:21.197 "uuid": "2f15d22d-3b3b-5402-af45-0c455c7f957f", 00:27:21.197 "is_configured": true, 00:27:21.197 "data_offset": 256, 00:27:21.197 "data_size": 7936 00:27:21.197 }, 00:27:21.197 { 00:27:21.197 "name": "BaseBdev2", 00:27:21.197 "uuid": "54dba6e2-180d-56b8-b84e-1034c159c6c3", 00:27:21.197 "is_configured": true, 00:27:21.197 "data_offset": 256, 00:27:21.197 "data_size": 7936 00:27:21.197 } 00:27:21.197 ] 00:27:21.197 }' 00:27:21.197 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:21.197 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:21.197 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:21.197 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:21.197 22:10:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:21.197 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:21.197 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:21.197 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:21.197 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:21.197 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:21.197 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:21.197 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:21.197 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:21.197 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:21.197 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:21.197 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:21.455 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:21.455 "name": "raid_bdev1", 00:27:21.455 "uuid": "d67425bf-ed12-45bf-8721-baf8bae81ebd", 00:27:21.455 "strip_size_kb": 0, 00:27:21.455 "state": "online", 00:27:21.455 "raid_level": "raid1", 00:27:21.455 "superblock": true, 00:27:21.455 "num_base_bdevs": 2, 00:27:21.455 
"num_base_bdevs_discovered": 2, 00:27:21.455 "num_base_bdevs_operational": 2, 00:27:21.455 "base_bdevs_list": [ 00:27:21.455 { 00:27:21.455 "name": "spare", 00:27:21.455 "uuid": "2f15d22d-3b3b-5402-af45-0c455c7f957f", 00:27:21.455 "is_configured": true, 00:27:21.455 "data_offset": 256, 00:27:21.455 "data_size": 7936 00:27:21.455 }, 00:27:21.455 { 00:27:21.455 "name": "BaseBdev2", 00:27:21.455 "uuid": "54dba6e2-180d-56b8-b84e-1034c159c6c3", 00:27:21.455 "is_configured": true, 00:27:21.455 "data_offset": 256, 00:27:21.455 "data_size": 7936 00:27:21.455 } 00:27:21.455 ] 00:27:21.455 }' 00:27:21.455 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:21.455 22:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:22.048 22:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:22.048 [2024-07-13 22:10:41.334875] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:22.048 [2024-07-13 22:10:41.334911] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:22.048 [2024-07-13 22:10:41.334997] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:22.048 [2024-07-13 22:10:41.335057] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:22.048 [2024-07-13 22:10:41.335070] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:27:22.048 22:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:22.048 22:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@719 -- # jq length 00:27:22.306 22:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:27:22.306 22:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:27:22.306 22:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:27:22.306 22:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:22.306 22:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:22.565 [2024-07-13 22:10:41.848194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:22.565 [2024-07-13 22:10:41.848251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:22.565 [2024-07-13 22:10:41.848273] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:27:22.565 [2024-07-13 22:10:41.848284] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:22.565 [2024-07-13 22:10:41.850137] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:22.565 [2024-07-13 22:10:41.850165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:22.566 [2024-07-13 22:10:41.850226] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:22.566 [2024-07-13 22:10:41.850273] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:22.566 [2024-07-13 22:10:41.850378] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:22.566 spare 00:27:22.566 22:10:41 
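After `bdev_raid_delete`, the test asserts the bdev list is empty by piping the RPC output through `jq length` and comparing against 0 (`bdev_raid.sh@719`). A standalone sketch of that check, with the RPC output mocked as an empty JSON array:

```shell
#!/usr/bin/env bash
# Stand-in for the post-delete `bdev_raid_get_bdevs all` output.
bdevs='[]'
count=$(jq length <<< "$bdevs")
[[ $count == 0 ]] && echo "no raid bdevs remain"
```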
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:22.566 22:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:22.566 22:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:22.566 22:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:22.566 22:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:22.566 22:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:22.566 22:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:22.566 22:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:22.566 22:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:22.566 22:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:22.566 22:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:22.566 22:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:22.566 [2024-07-13 22:10:41.950699] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042c80 00:27:22.566 [2024-07-13 22:10:41.950738] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:27:22.566 [2024-07-13 22:10:41.950844] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:27:22.566 [2024-07-13 
22:10:41.950998] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042c80 00:27:22.566 [2024-07-13 22:10:41.951010] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042c80 00:27:22.566 [2024-07-13 22:10:41.951096] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:22.824 22:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:22.824 "name": "raid_bdev1", 00:27:22.824 "uuid": "d67425bf-ed12-45bf-8721-baf8bae81ebd", 00:27:22.824 "strip_size_kb": 0, 00:27:22.824 "state": "online", 00:27:22.824 "raid_level": "raid1", 00:27:22.824 "superblock": true, 00:27:22.824 "num_base_bdevs": 2, 00:27:22.824 "num_base_bdevs_discovered": 2, 00:27:22.824 "num_base_bdevs_operational": 2, 00:27:22.824 "base_bdevs_list": [ 00:27:22.824 { 00:27:22.824 "name": "spare", 00:27:22.824 "uuid": "2f15d22d-3b3b-5402-af45-0c455c7f957f", 00:27:22.824 "is_configured": true, 00:27:22.824 "data_offset": 256, 00:27:22.824 "data_size": 7936 00:27:22.824 }, 00:27:22.824 { 00:27:22.824 "name": "BaseBdev2", 00:27:22.824 "uuid": "54dba6e2-180d-56b8-b84e-1034c159c6c3", 00:27:22.824 "is_configured": true, 00:27:22.824 "data_offset": 256, 00:27:22.824 "data_size": 7936 00:27:22.824 } 00:27:22.824 ] 00:27:22.824 }' 00:27:22.824 22:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:22.824 22:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:23.390 22:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:23.390 22:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:23.390 22:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:23.390 
22:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:23.390 22:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:23.390 22:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.390 22:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:23.390 22:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:23.390 "name": "raid_bdev1", 00:27:23.390 "uuid": "d67425bf-ed12-45bf-8721-baf8bae81ebd", 00:27:23.390 "strip_size_kb": 0, 00:27:23.390 "state": "online", 00:27:23.390 "raid_level": "raid1", 00:27:23.390 "superblock": true, 00:27:23.390 "num_base_bdevs": 2, 00:27:23.390 "num_base_bdevs_discovered": 2, 00:27:23.390 "num_base_bdevs_operational": 2, 00:27:23.390 "base_bdevs_list": [ 00:27:23.390 { 00:27:23.390 "name": "spare", 00:27:23.390 "uuid": "2f15d22d-3b3b-5402-af45-0c455c7f957f", 00:27:23.390 "is_configured": true, 00:27:23.390 "data_offset": 256, 00:27:23.390 "data_size": 7936 00:27:23.390 }, 00:27:23.390 { 00:27:23.390 "name": "BaseBdev2", 00:27:23.390 "uuid": "54dba6e2-180d-56b8-b84e-1034c159c6c3", 00:27:23.390 "is_configured": true, 00:27:23.390 "data_offset": 256, 00:27:23.390 "data_size": 7936 00:27:23.390 } 00:27:23.390 ] 00:27:23.390 }' 00:27:23.390 22:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:23.390 22:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:23.390 22:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:23.390 22:10:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:23.390 22:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.390 22:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:23.648 22:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:27:23.648 22:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:23.907 [2024-07-13 22:10:43.067461] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:23.907 22:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:23.907 22:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:23.907 22:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:23.907 22:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:23.907 22:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:23.907 22:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:23.907 22:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:23.907 22:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:23.907 22:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 
-- # local num_base_bdevs_discovered 00:27:23.907 22:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:23.907 22:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.907 22:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:23.907 22:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:23.907 "name": "raid_bdev1", 00:27:23.907 "uuid": "d67425bf-ed12-45bf-8721-baf8bae81ebd", 00:27:23.907 "strip_size_kb": 0, 00:27:23.907 "state": "online", 00:27:23.907 "raid_level": "raid1", 00:27:23.907 "superblock": true, 00:27:23.907 "num_base_bdevs": 2, 00:27:23.907 "num_base_bdevs_discovered": 1, 00:27:23.907 "num_base_bdevs_operational": 1, 00:27:23.907 "base_bdevs_list": [ 00:27:23.907 { 00:27:23.907 "name": null, 00:27:23.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:23.907 "is_configured": false, 00:27:23.907 "data_offset": 256, 00:27:23.907 "data_size": 7936 00:27:23.907 }, 00:27:23.907 { 00:27:23.907 "name": "BaseBdev2", 00:27:23.907 "uuid": "54dba6e2-180d-56b8-b84e-1034c159c6c3", 00:27:23.907 "is_configured": true, 00:27:23.907 "data_offset": 256, 00:27:23.907 "data_size": 7936 00:27:23.907 } 00:27:23.907 ] 00:27:23.907 }' 00:27:23.907 22:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:23.907 22:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:24.471 22:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:24.729 [2024-07-13 22:10:43.881615] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:24.729 [2024-07-13 22:10:43.881793] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:24.729 [2024-07-13 22:10:43.881812] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:24.729 [2024-07-13 22:10:43.881846] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:24.729 [2024-07-13 22:10:43.899687] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:27:24.729 [2024-07-13 22:10:43.901453] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:24.729 22:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:27:25.660 22:10:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:25.660 22:10:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:25.660 22:10:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:25.660 22:10:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:25.660 22:10:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:25.660 22:10:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:25.660 22:10:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:25.917 22:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:27:25.917 "name": "raid_bdev1", 00:27:25.917 "uuid": "d67425bf-ed12-45bf-8721-baf8bae81ebd", 00:27:25.917 "strip_size_kb": 0, 00:27:25.917 "state": "online", 00:27:25.917 "raid_level": "raid1", 00:27:25.917 "superblock": true, 00:27:25.917 "num_base_bdevs": 2, 00:27:25.917 "num_base_bdevs_discovered": 2, 00:27:25.917 "num_base_bdevs_operational": 2, 00:27:25.917 "process": { 00:27:25.917 "type": "rebuild", 00:27:25.917 "target": "spare", 00:27:25.917 "progress": { 00:27:25.917 "blocks": 2816, 00:27:25.917 "percent": 35 00:27:25.917 } 00:27:25.917 }, 00:27:25.917 "base_bdevs_list": [ 00:27:25.917 { 00:27:25.917 "name": "spare", 00:27:25.917 "uuid": "2f15d22d-3b3b-5402-af45-0c455c7f957f", 00:27:25.917 "is_configured": true, 00:27:25.917 "data_offset": 256, 00:27:25.917 "data_size": 7936 00:27:25.917 }, 00:27:25.917 { 00:27:25.917 "name": "BaseBdev2", 00:27:25.917 "uuid": "54dba6e2-180d-56b8-b84e-1034c159c6c3", 00:27:25.917 "is_configured": true, 00:27:25.917 "data_offset": 256, 00:27:25.917 "data_size": 7936 00:27:25.917 } 00:27:25.917 ] 00:27:25.917 }' 00:27:25.917 22:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:25.917 22:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:25.917 22:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:25.917 22:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:25.917 22:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:26.176 [2024-07-13 22:10:45.338836] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:26.176 [2024-07-13 22:10:45.412831] bdev_raid.c:2513:raid_bdev_process_finish_done: 
*WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:26.176 [2024-07-13 22:10:45.412882] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:26.176 [2024-07-13 22:10:45.412897] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:26.176 [2024-07-13 22:10:45.412917] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:26.176 22:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:26.176 22:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:26.176 22:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:26.176 22:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:26.176 22:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:26.176 22:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:26.176 22:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:26.176 22:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:26.176 22:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:26.176 22:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:26.176 22:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:26.176 22:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:27:26.434 22:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:26.434 "name": "raid_bdev1", 00:27:26.434 "uuid": "d67425bf-ed12-45bf-8721-baf8bae81ebd", 00:27:26.434 "strip_size_kb": 0, 00:27:26.434 "state": "online", 00:27:26.434 "raid_level": "raid1", 00:27:26.434 "superblock": true, 00:27:26.434 "num_base_bdevs": 2, 00:27:26.434 "num_base_bdevs_discovered": 1, 00:27:26.434 "num_base_bdevs_operational": 1, 00:27:26.434 "base_bdevs_list": [ 00:27:26.434 { 00:27:26.434 "name": null, 00:27:26.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.434 "is_configured": false, 00:27:26.434 "data_offset": 256, 00:27:26.434 "data_size": 7936 00:27:26.434 }, 00:27:26.434 { 00:27:26.434 "name": "BaseBdev2", 00:27:26.434 "uuid": "54dba6e2-180d-56b8-b84e-1034c159c6c3", 00:27:26.434 "is_configured": true, 00:27:26.434 "data_offset": 256, 00:27:26.434 "data_size": 7936 00:27:26.434 } 00:27:26.434 ] 00:27:26.434 }' 00:27:26.434 22:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:26.434 22:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:27.001 22:10:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:27.001 [2024-07-13 22:10:46.268561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:27.001 [2024-07-13 22:10:46.268622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:27.001 [2024-07-13 22:10:46.268643] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043280 00:27:27.001 [2024-07-13 22:10:46.268656] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:27.001 [2024-07-13 
22:10:46.268877] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:27.001 [2024-07-13 22:10:46.268895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:27.001 [2024-07-13 22:10:46.268978] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:27.001 [2024-07-13 22:10:46.268993] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:27.001 [2024-07-13 22:10:46.269005] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:27.001 [2024-07-13 22:10:46.269029] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:27.001 [2024-07-13 22:10:46.286876] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010a50 00:27:27.002 spare 00:27:27.002 [2024-07-13 22:10:46.288625] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:27.002 22:10:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:27:27.937 22:10:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:27.937 22:10:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:27.937 22:10:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:27.937 22:10:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:27.937 22:10:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:27.937 22:10:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:27.937 22:10:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:28.195 22:10:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:28.195 "name": "raid_bdev1", 00:27:28.195 "uuid": "d67425bf-ed12-45bf-8721-baf8bae81ebd", 00:27:28.195 "strip_size_kb": 0, 00:27:28.195 "state": "online", 00:27:28.195 "raid_level": "raid1", 00:27:28.195 "superblock": true, 00:27:28.195 "num_base_bdevs": 2, 00:27:28.195 "num_base_bdevs_discovered": 2, 00:27:28.195 "num_base_bdevs_operational": 2, 00:27:28.195 "process": { 00:27:28.195 "type": "rebuild", 00:27:28.195 "target": "spare", 00:27:28.195 "progress": { 00:27:28.195 "blocks": 2816, 00:27:28.195 "percent": 35 00:27:28.195 } 00:27:28.195 }, 00:27:28.195 "base_bdevs_list": [ 00:27:28.195 { 00:27:28.195 "name": "spare", 00:27:28.195 "uuid": "2f15d22d-3b3b-5402-af45-0c455c7f957f", 00:27:28.195 "is_configured": true, 00:27:28.195 "data_offset": 256, 00:27:28.195 "data_size": 7936 00:27:28.195 }, 00:27:28.195 { 00:27:28.195 "name": "BaseBdev2", 00:27:28.195 "uuid": "54dba6e2-180d-56b8-b84e-1034c159c6c3", 00:27:28.195 "is_configured": true, 00:27:28.195 "data_offset": 256, 00:27:28.195 "data_size": 7936 00:27:28.195 } 00:27:28.195 ] 00:27:28.195 }' 00:27:28.195 22:10:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:28.195 22:10:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:28.195 22:10:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:28.195 22:10:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:28.195 22:10:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:28.454 [2024-07-13 22:10:47.714098] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:28.454 [2024-07-13 22:10:47.800150] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:28.454 [2024-07-13 22:10:47.800199] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:28.454 [2024-07-13 22:10:47.800220] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:28.454 [2024-07-13 22:10:47.800228] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:28.713 22:10:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:28.713 22:10:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:28.713 22:10:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:28.713 22:10:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:28.713 22:10:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:28.713 22:10:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:28.713 22:10:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:28.713 22:10:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:28.713 22:10:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:28.713 22:10:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 
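The trace above shows `verify_raid_bdev_state raid_bdev1 online raid1 0 1` declaring its locals (expected state, raid level, strip size, operational base bdev count) before fetching `raid_bdev_info` over the RPC socket. As a rough standalone sketch of the comparison it then performs, not SPDK's `bdev_raid.sh` itself, the check can be modeled in Python against JSON trimmed from the log:

```python
import json

# Standalone sketch (an illustration, not SPDK code) of the state check the
# trace runs after `spare` was removed: state "online", raid_level "raid1",
# strip_size 0, and 1 discovered / 1 operational base bdev. The JSON is a
# trimmed copy of the raid_bdev_info the log prints.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1,
  "base_bdevs_list": [
    {"name": null, "is_configured": false},
    {"name": "BaseBdev2", "is_configured": true}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size_kb, num_operational):
    # Mirror the [[ ... == ... ]] comparisons the shell helper performs.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size_kb
    assert info["num_base_bdevs_operational"] == num_operational
    # A removed base bdev remains in the list as an unconfigured null entry,
    # so only configured entries count toward num_base_bdevs_discovered.
    configured = [b for b in info["base_bdevs_list"] if b["is_configured"]]
    assert len(configured) == info["num_base_bdevs_discovered"]

verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 0, 1)
```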
00:27:28.713 22:10:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:28.713 22:10:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:28.713 22:10:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:28.713 "name": "raid_bdev1", 00:27:28.713 "uuid": "d67425bf-ed12-45bf-8721-baf8bae81ebd", 00:27:28.713 "strip_size_kb": 0, 00:27:28.713 "state": "online", 00:27:28.713 "raid_level": "raid1", 00:27:28.713 "superblock": true, 00:27:28.713 "num_base_bdevs": 2, 00:27:28.713 "num_base_bdevs_discovered": 1, 00:27:28.713 "num_base_bdevs_operational": 1, 00:27:28.713 "base_bdevs_list": [ 00:27:28.713 { 00:27:28.713 "name": null, 00:27:28.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:28.713 "is_configured": false, 00:27:28.713 "data_offset": 256, 00:27:28.713 "data_size": 7936 00:27:28.713 }, 00:27:28.713 { 00:27:28.713 "name": "BaseBdev2", 00:27:28.713 "uuid": "54dba6e2-180d-56b8-b84e-1034c159c6c3", 00:27:28.713 "is_configured": true, 00:27:28.713 "data_offset": 256, 00:27:28.713 "data_size": 7936 00:27:28.713 } 00:27:28.713 ] 00:27:28.713 }' 00:27:28.713 22:10:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:28.713 22:10:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:29.280 22:10:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:29.281 22:10:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:29.281 22:10:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:29.281 22:10:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:29.281 22:10:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:29.281 22:10:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:29.281 22:10:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:29.539 22:10:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:29.539 "name": "raid_bdev1", 00:27:29.539 "uuid": "d67425bf-ed12-45bf-8721-baf8bae81ebd", 00:27:29.539 "strip_size_kb": 0, 00:27:29.539 "state": "online", 00:27:29.539 "raid_level": "raid1", 00:27:29.539 "superblock": true, 00:27:29.539 "num_base_bdevs": 2, 00:27:29.539 "num_base_bdevs_discovered": 1, 00:27:29.539 "num_base_bdevs_operational": 1, 00:27:29.539 "base_bdevs_list": [ 00:27:29.539 { 00:27:29.539 "name": null, 00:27:29.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:29.539 "is_configured": false, 00:27:29.539 "data_offset": 256, 00:27:29.539 "data_size": 7936 00:27:29.539 }, 00:27:29.539 { 00:27:29.539 "name": "BaseBdev2", 00:27:29.539 "uuid": "54dba6e2-180d-56b8-b84e-1034c159c6c3", 00:27:29.539 "is_configured": true, 00:27:29.539 "data_offset": 256, 00:27:29.539 "data_size": 7936 00:27:29.539 } 00:27:29.539 ] 00:27:29.539 }' 00:27:29.539 22:10:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:29.539 22:10:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:29.539 22:10:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:29.539 22:10:48 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:29.539 22:10:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:27:29.798 22:10:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:29.798 [2024-07-13 22:10:49.077446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:29.798 [2024-07-13 22:10:49.077501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:29.798 [2024-07-13 22:10:49.077528] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043880 00:27:29.798 [2024-07-13 22:10:49.077539] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:29.798 [2024-07-13 22:10:49.077730] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:29.798 [2024-07-13 22:10:49.077747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:29.798 [2024-07-13 22:10:49.077796] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:29.798 [2024-07-13 22:10:49.077810] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:29.798 [2024-07-13 22:10:49.077823] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:29.798 BaseBdev1 00:27:29.798 22:10:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:27:30.734 22:10:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
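The log records two different `raid_bdev_examine_sb` outcomes: for `spare`, the superblock seq_number (4) is smaller than the raid bdev's (5) but its uuid is still listed, so it is re-added and a rebuild starts; for `BaseBdev1` (seq 1), the message "raid superblock does not contain this bdev's uuid" means it is skipped, which is why the later explicit `bdev_raid_add_base_bdev` call is expected to fail. A toy model of that decision, under the assumption that these log messages fully describe the branch taken (this is not SPDK's actual `bdev_raid.c` logic):

```python
# Toy model of the raid_bdev_examine_sb outcomes seen in the log: a base bdev
# with a stale superblock seq_number is re-added only if the existing raid
# bdev's superblock still lists its uuid. Assumption-laden illustration only.
def examine_sb(sb_seq, raid_seq, uuid_in_raid_sb):
    if sb_seq < raid_seq and uuid_in_raid_sb:
        return "re-add"      # "Re-adding bdev ... to raid bdev raid_bdev1."
    if sb_seq < raid_seq:
        return "skip"        # "superblock does not contain this bdev's uuid"
    return "configure"

# `spare` (seq 4 < 5, uuid present) is re-added and rebuild starts;
# `BaseBdev1` (seq 1 < 5, uuid absent) is skipped.
assert examine_sb(4, 5, True) == "re-add"
assert examine_sb(1, 5, False) == "skip"
```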
00:27:30.734 22:10:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:30.734 22:10:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:30.734 22:10:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:30.734 22:10:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:30.734 22:10:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:30.734 22:10:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:30.734 22:10:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:30.734 22:10:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:30.734 22:10:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:30.734 22:10:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:30.734 22:10:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:30.993 22:10:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:30.993 "name": "raid_bdev1", 00:27:30.993 "uuid": "d67425bf-ed12-45bf-8721-baf8bae81ebd", 00:27:30.993 "strip_size_kb": 0, 00:27:30.993 "state": "online", 00:27:30.993 "raid_level": "raid1", 00:27:30.993 "superblock": true, 00:27:30.993 "num_base_bdevs": 2, 00:27:30.993 "num_base_bdevs_discovered": 1, 00:27:30.993 "num_base_bdevs_operational": 1, 00:27:30.993 "base_bdevs_list": [ 00:27:30.993 { 00:27:30.993 "name": 
null, 00:27:30.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:30.993 "is_configured": false, 00:27:30.993 "data_offset": 256, 00:27:30.993 "data_size": 7936 00:27:30.993 }, 00:27:30.993 { 00:27:30.993 "name": "BaseBdev2", 00:27:30.993 "uuid": "54dba6e2-180d-56b8-b84e-1034c159c6c3", 00:27:30.993 "is_configured": true, 00:27:30.993 "data_offset": 256, 00:27:30.993 "data_size": 7936 00:27:30.993 } 00:27:30.993 ] 00:27:30.993 }' 00:27:30.993 22:10:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:30.993 22:10:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:31.561 22:10:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:31.561 22:10:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:31.561 22:10:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:31.561 22:10:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:31.561 22:10:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:31.561 22:10:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:31.561 22:10:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:31.820 22:10:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:31.820 "name": "raid_bdev1", 00:27:31.820 "uuid": "d67425bf-ed12-45bf-8721-baf8bae81ebd", 00:27:31.820 "strip_size_kb": 0, 00:27:31.820 "state": "online", 00:27:31.820 "raid_level": "raid1", 00:27:31.820 "superblock": true, 
00:27:31.820 "num_base_bdevs": 2, 00:27:31.820 "num_base_bdevs_discovered": 1, 00:27:31.820 "num_base_bdevs_operational": 1, 00:27:31.820 "base_bdevs_list": [ 00:27:31.820 { 00:27:31.820 "name": null, 00:27:31.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:31.820 "is_configured": false, 00:27:31.820 "data_offset": 256, 00:27:31.820 "data_size": 7936 00:27:31.820 }, 00:27:31.820 { 00:27:31.820 "name": "BaseBdev2", 00:27:31.820 "uuid": "54dba6e2-180d-56b8-b84e-1034c159c6c3", 00:27:31.820 "is_configured": true, 00:27:31.820 "data_offset": 256, 00:27:31.820 "data_size": 7936 00:27:31.820 } 00:27:31.820 ] 00:27:31.820 }' 00:27:31.820 22:10:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:31.820 22:10:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:31.820 22:10:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:31.820 22:10:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:31.820 22:10:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:31.820 22:10:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:27:31.820 22:10:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:31.820 22:10:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:31.820 22:10:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:31.820 22:10:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:31.820 22:10:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:31.820 22:10:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:31.820 22:10:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:31.820 22:10:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:31.820 22:10:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:27:31.820 22:10:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:31.820 [2024-07-13 22:10:51.191024] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:31.820 [2024-07-13 22:10:51.191170] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:31.820 [2024-07-13 22:10:51.191187] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:31.820 request: 00:27:31.820 { 00:27:31.820 "base_bdev": "BaseBdev1", 00:27:31.820 "raid_bdev": "raid_bdev1", 00:27:31.820 "method": "bdev_raid_add_base_bdev", 00:27:31.820 "req_id": 1 00:27:31.820 } 00:27:31.820 Got JSON-RPC error response 00:27:31.820 response: 
00:27:31.820 { 00:27:31.820 "code": -22, 00:27:31.820 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:31.820 } 00:27:32.079 22:10:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:27:32.079 22:10:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:32.079 22:10:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:32.079 22:10:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:32.079 22:10:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:27:33.013 22:10:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:33.013 22:10:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:33.013 22:10:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:33.013 22:10:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:33.013 22:10:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:33.013 22:10:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:33.013 22:10:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:33.013 22:10:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:33.013 22:10:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:33.013 22:10:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:33.013 22:10:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:33.013 22:10:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:33.013 22:10:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:33.013 "name": "raid_bdev1", 00:27:33.013 "uuid": "d67425bf-ed12-45bf-8721-baf8bae81ebd", 00:27:33.013 "strip_size_kb": 0, 00:27:33.013 "state": "online", 00:27:33.013 "raid_level": "raid1", 00:27:33.013 "superblock": true, 00:27:33.013 "num_base_bdevs": 2, 00:27:33.013 "num_base_bdevs_discovered": 1, 00:27:33.013 "num_base_bdevs_operational": 1, 00:27:33.013 "base_bdevs_list": [ 00:27:33.013 { 00:27:33.013 "name": null, 00:27:33.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:33.013 "is_configured": false, 00:27:33.013 "data_offset": 256, 00:27:33.013 "data_size": 7936 00:27:33.013 }, 00:27:33.013 { 00:27:33.013 "name": "BaseBdev2", 00:27:33.013 "uuid": "54dba6e2-180d-56b8-b84e-1034c159c6c3", 00:27:33.013 "is_configured": true, 00:27:33.013 "data_offset": 256, 00:27:33.013 "data_size": 7936 00:27:33.013 } 00:27:33.013 ] 00:27:33.013 }' 00:27:33.013 22:10:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:33.013 22:10:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:33.580 22:10:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:33.580 22:10:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:33.580 22:10:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:33.580 22:10:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:33.580 22:10:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:33.580 22:10:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:33.580 22:10:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:33.838 22:10:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:33.838 "name": "raid_bdev1", 00:27:33.838 "uuid": "d67425bf-ed12-45bf-8721-baf8bae81ebd", 00:27:33.838 "strip_size_kb": 0, 00:27:33.838 "state": "online", 00:27:33.838 "raid_level": "raid1", 00:27:33.838 "superblock": true, 00:27:33.838 "num_base_bdevs": 2, 00:27:33.838 "num_base_bdevs_discovered": 1, 00:27:33.838 "num_base_bdevs_operational": 1, 00:27:33.838 "base_bdevs_list": [ 00:27:33.838 { 00:27:33.838 "name": null, 00:27:33.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:33.838 "is_configured": false, 00:27:33.838 "data_offset": 256, 00:27:33.838 "data_size": 7936 00:27:33.838 }, 00:27:33.838 { 00:27:33.838 "name": "BaseBdev2", 00:27:33.838 "uuid": "54dba6e2-180d-56b8-b84e-1034c159c6c3", 00:27:33.838 "is_configured": true, 00:27:33.838 "data_offset": 256, 00:27:33.838 "data_size": 7936 00:27:33.838 } 00:27:33.838 ] 00:27:33.838 }' 00:27:33.838 22:10:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:33.838 22:10:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:33.838 22:10:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:33.838 22:10:53 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:33.838 22:10:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 1520297 00:27:33.838 22:10:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 1520297 ']' 00:27:33.838 22:10:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 1520297 00:27:33.838 22:10:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:27:33.838 22:10:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:33.838 22:10:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1520297 00:27:33.838 22:10:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:33.838 22:10:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:33.838 22:10:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1520297' 00:27:33.838 killing process with pid 1520297 00:27:33.838 22:10:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 1520297 00:27:33.838 Received shutdown signal, test time was about 60.000000 seconds 00:27:33.838 00:27:33.838 Latency(us) 00:27:33.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.838 =================================================================================================================== 00:27:33.838 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:33.838 [2024-07-13 22:10:53.217349] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:33.838 [2024-07-13 22:10:53.217456] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:27:33.838 [2024-07-13 22:10:53.217506] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:33.838 [2024-07-13 22:10:53.217518] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042c80 name raid_bdev1, state offline 00:27:33.838 22:10:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 1520297 00:27:34.096 [2024-07-13 22:10:53.451296] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:35.513 22:10:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:27:35.513 00:27:35.513 real 0m25.201s 00:27:35.513 user 0m37.795s 00:27:35.513 sys 0m3.185s 00:27:35.513 22:10:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:35.513 22:10:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:35.513 ************************************ 00:27:35.513 END TEST raid_rebuild_test_sb_md_interleaved 00:27:35.513 ************************************ 00:27:35.513 22:10:54 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:35.513 22:10:54 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:27:35.513 22:10:54 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:27:35.513 22:10:54 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 1520297 ']' 00:27:35.513 22:10:54 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 1520297 00:27:35.513 22:10:54 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:27:35.513 00:27:35.513 real 15m55.103s 00:27:35.513 user 25m8.249s 00:27:35.513 sys 2m51.166s 00:27:35.513 22:10:54 bdev_raid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:35.513 22:10:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:35.513 ************************************ 00:27:35.513 END TEST bdev_raid 00:27:35.513 ************************************ 00:27:35.513 22:10:54 -- 
common/autotest_common.sh@1142 -- # return 0 00:27:35.513 22:10:54 -- spdk/autotest.sh@191 -- # run_test bdevperf_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:27:35.513 22:10:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:35.513 22:10:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:35.513 22:10:54 -- common/autotest_common.sh@10 -- # set +x 00:27:35.513 ************************************ 00:27:35.513 START TEST bdevperf_config 00:27:35.513 ************************************ 00:27:35.513 22:10:54 bdevperf_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:27:35.773 * Looking for test storage... 00:27:35.773 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/common.sh 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:27:35.773 22:10:54 
bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:35.773 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:35.773 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:35.773 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 
00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:35.773 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:35.773 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:35.773 22:10:54 bdevperf_config -- bdevperf/test_config.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:27:41.050 22:10:59 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-13 22:10:55.070594] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:27:41.050 [2024-07-13 22:10:55.070692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524995 ] 00:27:41.050 Using job config with 4 jobs 00:27:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.050 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.050 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.050 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.050 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.050 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.050 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.050 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.050 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.050 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:41.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3f:02.1 
cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:41.051 [2024-07-13 22:10:55.240060] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.051 [2024-07-13 22:10:55.457618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.051 cpumask for '\''job0'\'' is too big 00:27:41.051 cpumask for '\''job1'\'' is too big 00:27:41.051 cpumask for '\''job2'\'' is too big 00:27:41.051 cpumask for '\''job3'\'' is too big 00:27:41.051 Running I/O for 2 seconds... 
00:27:41.051 00:27:41.051 Latency(us) 00:27:41.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.051 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:41.051 Malloc0 : 2.01 34492.72 33.68 0.00 0.00 7411.69 1402.47 11324.62 00:27:41.051 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:41.051 Malloc0 : 2.01 34471.78 33.66 0.00 0.00 7404.30 1291.06 10013.90 00:27:41.051 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:41.051 Malloc0 : 2.01 34451.46 33.64 0.00 0.00 7397.20 1297.61 8703.18 00:27:41.051 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:41.051 Malloc0 : 2.02 34524.56 33.72 0.00 0.00 7370.20 668.47 8021.61 00:27:41.051 =================================================================================================================== 00:27:41.051 Total : 137940.51 134.71 0.00 0.00 7395.82 668.47 11324.62' 00:27:41.051 22:10:59 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-13 22:10:55.070594] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:27:41.051 [... captured bdevperf output re-emitted verbatim here by get_num_jobs and again by common.sh@32 echo: same DPDK EAL parameters (spdk_pid1524995), same repeated qat_pci_device_allocate()/EAL "Requested device 0000:3d:xx.x / 0000:3f:xx.x cannot be used" warnings, same cpumask notices and 4-job Malloc0 latency table as the first copy above; duplicate copies omitted ...]
00:27:41.051 00:27:41.051 Latency(us) 00:27:41.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.051 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:41.051 Malloc0 : 2.01 34492.72 33.68 0.00 0.00 7411.69 1402.47 11324.62 00:27:41.051 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:41.051 Malloc0 : 2.01 34471.78 33.66 0.00 0.00 7404.30 1291.06 10013.90 00:27:41.051 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:41.051 Malloc0 : 2.01 34451.46 33.64 0.00 0.00 7397.20 1297.61 8703.18 00:27:41.051 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:41.051 Malloc0 : 2.02 34524.56 33.72 0.00 0.00 7370.20 668.47 8021.61 00:27:41.051 =================================================================================================================== 00:27:41.051 Total : 137940.51 134.71 0.00 0.00 7395.82 668.47 11324.62' 00:27:41.051 22:10:59 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:27:41.051 22:10:59 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:27:41.051 22:10:59 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:27:41.051 22:10:59 bdevperf_config -- bdevperf/test_config.sh@25 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -C -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:27:41.051 [2024-07-13 22:10:59.485899] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:27:41.051 [2024-07-13 22:10:59.485999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1525643 ] 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3d:02.3 cannot be used 
00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:41.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.051 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:41.052 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.052 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:41.052 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.052 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:41.052 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.052 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:41.052 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.052 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:41.052 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.052 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:41.052 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.052 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:41.052 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.052 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:41.052 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.052 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:41.052 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.052 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:41.052 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.052 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:41.052 [2024-07-13 22:10:59.660550] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.052 [2024-07-13 22:10:59.877214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.052 cpumask for 'job0' is too big 00:27:41.052 cpumask for 'job1' is too big 00:27:41.052 cpumask for 'job2' is too big 00:27:41.052 cpumask for 'job3' is too big 00:27:45.245 22:11:03 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:27:45.245 Running I/O for 2 seconds... 
00:27:45.245 00:27:45.245 Latency(us) 00:27:45.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:45.245 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:45.245 Malloc0 : 2.01 35189.28 34.36 0.00 0.00 7266.98 1382.81 11639.19 00:27:45.245 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:45.245 Malloc0 : 2.01 35198.65 34.37 0.00 0.00 7253.31 1350.04 10171.19 00:27:45.245 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:45.246 Malloc0 : 2.02 35177.78 34.35 0.00 0.00 7245.30 1297.61 8860.47 00:27:45.246 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:45.246 Malloc0 : 2.02 35156.61 34.33 0.00 0.00 7237.85 1297.61 7969.18 00:27:45.246 =================================================================================================================== 00:27:45.246 Total : 140722.31 137.42 0.00 0.00 7250.85 1297.61 11639.19' 00:27:45.246 22:11:03 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:27:45.246 22:11:03 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:27:45.246 22:11:03 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:27:45.246 22:11:03 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:27:45.246 22:11:03 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:27:45.246 22:11:03 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:27:45.246 22:11:03 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:27:45.246 22:11:03 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:27:45.246 22:11:03 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:45.246 00:27:45.246 22:11:03 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:45.246 22:11:03 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 
write Malloc0 00:27:45.246 22:11:03 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:27:45.246 22:11:03 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:27:45.246 22:11:03 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:27:45.246 22:11:03 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:27:45.246 22:11:03 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:27:45.246 22:11:03 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:45.246 00:27:45.246 22:11:03 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:45.246 22:11:03 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:27:45.246 22:11:03 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:27:45.246 22:11:03 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:27:45.246 22:11:03 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:27:45.246 22:11:03 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:27:45.246 22:11:03 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:27:45.246 22:11:03 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:45.246 00:27:45.246 22:11:03 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:45.246 22:11:03 bdevperf_config -- bdevperf/test_config.sh@32 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:27:49.440 22:11:08 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-13 22:11:03.934329] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:27:49.440 [2024-07-13 22:11:03.934421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1526332 ] 00:27:49.440 Using job config with 3 jobs 00:27:49.440 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.440 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:49.440 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.440 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:49.440 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.440 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:49.440 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.440 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:49.440 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.440 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:49.440 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.440 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:49.440 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.440 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:49.440 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.440 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:49.440 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.440 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:49.440 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.440 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:49.440 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.440 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:49.440 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.440 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:27:49.440 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.440 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:49.440 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.440 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:49.440 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.440 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:49.440 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.440 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:49.440 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.440 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:49.440 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.440 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:49.440 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.440 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:49.440 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.440 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:49.440 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3f:02.1 
cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:49.441 [2024-07-13 22:11:04.114853] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.441 [2024-07-13 22:11:04.340179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.441 cpumask for '\''job0'\'' is too big 00:27:49.441 cpumask for '\''job1'\'' is too big 00:27:49.441 cpumask for '\''job2'\'' is too big 00:27:49.441 Running I/O for 2 seconds... 
00:27:49.441 00:27:49.441 Latency(us) 00:27:49.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:49.441 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:49.441 Malloc0 : 2.01 47296.78 46.19 0.00 0.00 5407.93 1343.49 8021.61 00:27:49.441 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:49.441 Malloc0 : 2.01 47268.65 46.16 0.00 0.00 5402.88 1251.74 6710.89 00:27:49.441 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:49.441 Malloc0 : 2.01 47325.99 46.22 0.00 0.00 5388.16 609.48 5793.38 00:27:49.441 =================================================================================================================== 00:27:49.441 Total : 141891.42 138.57 0.00 0.00 5399.65 609.48 8021.61' 00:27:49.441 22:11:08 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-13 22:11:03.934329] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:27:49.441 [2024-07-13 22:11:03.934421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1526332 ] 00:27:49.441 Using job config with 3 jobs 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3f:02.1 
cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:49.441 [2024-07-13 22:11:04.114853] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.441 [2024-07-13 22:11:04.340179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.441 cpumask for '\''job0'\'' is too big 00:27:49.441 cpumask for '\''job1'\'' is too big 00:27:49.441 cpumask for '\''job2'\'' is too big 00:27:49.441 Running I/O for 2 seconds... 
00:27:49.441 00:27:49.441 Latency(us) 00:27:49.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:49.441 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:49.441 Malloc0 : 2.01 47296.78 46.19 0.00 0.00 5407.93 1343.49 8021.61 00:27:49.441 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:49.441 Malloc0 : 2.01 47268.65 46.16 0.00 0.00 5402.88 1251.74 6710.89 00:27:49.441 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:49.441 Malloc0 : 2.01 47325.99 46.22 0.00 0.00 5388.16 609.48 5793.38 00:27:49.441 =================================================================================================================== 00:27:49.441 Total : 141891.42 138.57 0.00 0.00 5399.65 609.48 8021.61' 00:27:49.441 22:11:08 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:27:49.441 22:11:08 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-13 22:11:03.934329] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:27:49.441 [2024-07-13 22:11:03.934421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1526332 ] 00:27:49.441 Using job config with 3 jobs 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.441 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:49.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.442 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:49.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.442 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:49.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.442 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:49.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.442 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:49.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.442 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:49.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.442 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:49.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.442 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:49.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.442 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:49.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.442 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:49.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.442 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:49.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.442 EAL: Requested device 0000:3f:02.1 
cannot be used 00:27:49.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.442 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:49.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.442 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:49.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.442 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:49.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.442 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:49.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.442 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:49.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.442 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:49.442 [2024-07-13 22:11:04.114853] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.442 [2024-07-13 22:11:04.340179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.442 cpumask for '\''job0'\'' is too big 00:27:49.442 cpumask for '\''job1'\'' is too big 00:27:49.442 cpumask for '\''job2'\'' is too big 00:27:49.442 Running I/O for 2 seconds... 
00:27:49.442 00:27:49.442 Latency(us) 00:27:49.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:49.442 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:49.442 Malloc0 : 2.01 47296.78 46.19 0.00 0.00 5407.93 1343.49 8021.61 00:27:49.442 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:49.442 Malloc0 : 2.01 47268.65 46.16 0.00 0.00 5402.88 1251.74 6710.89 00:27:49.442 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:49.442 Malloc0 : 2.01 47325.99 46.22 0.00 0.00 5388.16 609.48 5793.38 00:27:49.442 =================================================================================================================== 00:27:49.442 Total : 141891.42 138.57 0.00 0.00 5399.65 609.48 8021.61' 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:49.442 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@20 
-- # cat 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:49.442 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:49.442 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:49.442 00:27:49.442 22:11:08 bdevperf_config -- 
bdevperf/common.sh@20 -- # cat 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:27:49.442 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:27:49.442 22:11:08 bdevperf_config -- bdevperf/test_config.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:27:53.635 22:11:12 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-13 22:11:08.366839] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:27:53.635 [2024-07-13 22:11:08.366957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1527128 ] 00:27:53.635 Using job config with 4 jobs 00:27:53.635 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.635 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:53.635 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.635 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:02.1 
cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:53.636 [2024-07-13 22:11:08.536267] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.636 [2024-07-13 22:11:08.759969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.636 cpumask for '\''job0'\'' is too big 00:27:53.636 cpumask for '\''job1'\'' is too big 00:27:53.636 cpumask for '\''job2'\'' is too big 00:27:53.636 cpumask for '\''job3'\'' is too big 00:27:53.636 Running I/O for 2 seconds... 
00:27:53.636 00:27:53.636 Latency(us) 00:27:53.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:53.636 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:53.636 Malloc0 : 2.03 17545.69 17.13 0.00 0.00 14581.83 2778.73 23173.53 00:27:53.636 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:53.636 Malloc1 : 2.03 17534.76 17.12 0.00 0.00 14579.43 3381.66 22963.81 00:27:53.636 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:53.636 Malloc0 : 2.03 17522.49 17.11 0.00 0.00 14553.65 2804.94 20132.66 00:27:53.636 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:53.636 Malloc1 : 2.03 17511.71 17.10 0.00 0.00 14550.97 3276.80 20132.66 00:27:53.636 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:53.636 Malloc0 : 2.03 17501.07 17.09 0.00 0.00 14524.81 2700.08 17301.50 00:27:53.636 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:53.636 Malloc1 : 2.03 17489.67 17.08 0.00 0.00 14523.34 3224.37 17301.50 00:27:53.636 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:53.636 Malloc0 : 2.04 17478.95 17.07 0.00 0.00 14497.26 2713.19 15623.78 00:27:53.636 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:53.636 Malloc1 : 2.04 17467.66 17.06 0.00 0.00 14498.52 3224.37 15623.78 00:27:53.636 =================================================================================================================== 00:27:53.636 Total : 140052.01 136.77 0.00 0.00 14538.73 2700.08 23173.53' 00:27:53.636 22:11:12 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-13 22:11:08.366839] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:27:53.636 [2024-07-13 22:11:08.366957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1527128 ] 00:27:53.636 Using job config with 4 jobs 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:02.1 
cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:53.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.636 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:53.636 [2024-07-13 22:11:08.536267] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.636 [2024-07-13 22:11:08.759969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.636 cpumask for '\''job0'\'' is too big 00:27:53.636 cpumask for '\''job1'\'' is too big 00:27:53.636 cpumask for '\''job2'\'' is too big 00:27:53.636 cpumask for '\''job3'\'' is too big 00:27:53.636 Running I/O for 2 seconds... 
00:27:53.636 00:27:53.637 Latency(us) 00:27:53.637 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:53.637 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:53.637 Malloc0 : 2.03 17545.69 17.13 0.00 0.00 14581.83 2778.73 23173.53 00:27:53.637 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:53.637 Malloc1 : 2.03 17534.76 17.12 0.00 0.00 14579.43 3381.66 22963.81 00:27:53.637 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:53.637 Malloc0 : 2.03 17522.49 17.11 0.00 0.00 14553.65 2804.94 20132.66 00:27:53.637 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:53.637 Malloc1 : 2.03 17511.71 17.10 0.00 0.00 14550.97 3276.80 20132.66 00:27:53.637 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:53.637 Malloc0 : 2.03 17501.07 17.09 0.00 0.00 14524.81 2700.08 17301.50 00:27:53.637 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:53.637 Malloc1 : 2.03 17489.67 17.08 0.00 0.00 14523.34 3224.37 17301.50 00:27:53.637 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:53.637 Malloc0 : 2.04 17478.95 17.07 0.00 0.00 14497.26 2713.19 15623.78 00:27:53.637 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:53.637 Malloc1 : 2.04 17467.66 17.06 0.00 0.00 14498.52 3224.37 15623.78 00:27:53.637 =================================================================================================================== 00:27:53.637 Total : 140052.01 136.77 0.00 0.00 14538.73 2700.08 23173.53' 00:27:53.637 22:11:12 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:27:53.637 22:11:12 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-13 22:11:08.366839] Starting SPDK v24.09-pre 
git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:27:53.637 [2024-07-13 22:11:08.366957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1527128 ] 00:27:53.637 Using job config with 4 jobs 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:27:53.637 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:27:53.637 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:53.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.637 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:53.637 [2024-07-13 22:11:08.536267] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.637 [2024-07-13 22:11:08.759969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.637 cpumask for '\''job0'\'' is too big 00:27:53.637 cpumask for '\''job1'\'' is too big 00:27:53.637 cpumask for '\''job2'\'' is too big 00:27:53.637 cpumask for '\''job3'\'' is too big 00:27:53.637 Running I/O for 2 seconds... 
00:27:53.637 00:27:53.637 Latency(us) 00:27:53.637 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:53.637 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:53.637 Malloc0 : 2.03 17545.69 17.13 0.00 0.00 14581.83 2778.73 23173.53 00:27:53.637 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:53.637 Malloc1 : 2.03 17534.76 17.12 0.00 0.00 14579.43 3381.66 22963.81 00:27:53.637 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:53.637 Malloc0 : 2.03 17522.49 17.11 0.00 0.00 14553.65 2804.94 20132.66 00:27:53.637 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:53.637 Malloc1 : 2.03 17511.71 17.10 0.00 0.00 14550.97 3276.80 20132.66 00:27:53.637 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:53.637 Malloc0 : 2.03 17501.07 17.09 0.00 0.00 14524.81 2700.08 17301.50 00:27:53.637 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:53.637 Malloc1 : 2.03 17489.67 17.08 0.00 0.00 14523.34 3224.37 17301.50 00:27:53.637 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:53.637 Malloc0 : 2.04 17478.95 17.07 0.00 0.00 14497.26 2713.19 15623.78 00:27:53.637 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:53.637 Malloc1 : 2.04 17467.66 17.06 0.00 0.00 14498.52 3224.37 15623.78 00:27:53.637 =================================================================================================================== 00:27:53.637 Total : 140052.01 136.77 0.00 0.00 14538.73 2700.08 23173.53' 00:27:53.637 22:11:12 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:27:53.637 22:11:12 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:27:53.637 22:11:12 bdevperf_config -- bdevperf/test_config.sh@44 
-- # cleanup 00:27:53.637 22:11:12 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:27:53.637 22:11:12 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:53.637 00:27:53.637 real 0m17.905s 00:27:53.637 user 0m16.230s 00:27:53.637 sys 0m1.490s 00:27:53.637 22:11:12 bdevperf_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:53.637 22:11:12 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:27:53.637 ************************************ 00:27:53.637 END TEST bdevperf_config 00:27:53.637 ************************************ 00:27:53.637 22:11:12 -- common/autotest_common.sh@1142 -- # return 0 00:27:53.637 22:11:12 -- spdk/autotest.sh@192 -- # uname -s 00:27:53.637 22:11:12 -- spdk/autotest.sh@192 -- # [[ Linux == Linux ]] 00:27:53.637 22:11:12 -- spdk/autotest.sh@193 -- # run_test reactor_set_interrupt /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:27:53.637 22:11:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:53.637 22:11:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:53.637 22:11:12 -- common/autotest_common.sh@10 -- # set +x 00:27:53.637 ************************************ 00:27:53.637 START TEST reactor_set_interrupt 00:27:53.637 ************************************ 00:27:53.637 22:11:12 reactor_set_interrupt -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:27:53.637 * Looking for test storage... 
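The trace above (common.sh@32 and test_config.sh@43) shows how the harness verifies the job count: the captured bdevperf output is grepped for the "Using job config with N jobs" banner, the number is extracted, and the result is compared against the expected count with `[[ 4 == \4 ]]`. A minimal sketch of that helper, reconstructed from the xtrace (the exact function body in bdevperf/common.sh is an assumption; only the two grep stages are shown verbatim in the log):

```shell
#!/usr/bin/env bash
# Sketch of the get_num_jobs helper as reconstructed from the xtrace:
# stage 1 isolates the bdevperf banner line, stage 2 extracts the count.
get_num_jobs() {
    echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
}

# Hypothetical captured output, mimicking the bdevperf_output variable above.
output='Starting SPDK v24.09-pre ... Using job config with 4 jobs ... Running I/O for 2 seconds...'
get_num_jobs "$output"    # prints: 4
```

The two-stage grep is deliberate: matching the full banner first avoids picking up other numbers (timestamps, PCI addresses, IOPS figures) that saturate the captured output.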
00:27:53.637 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:27:53.637 22:11:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:27:53.637 22:11:12 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:27:53.637 22:11:12 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:27:53.637 22:11:12 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:27:53.637 22:11:12 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 00:27:53.637 22:11:12 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:27:53.638 22:11:12 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:27:53.638 22:11:12 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:27:53.638 22:11:12 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:27:53.638 22:11:12 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:27:53.638 22:11:12 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:27:53.638 22:11:12 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:27:53.638 22:11:12 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:27:53.638 22:11:12 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:27:53.638 22:11:12 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:27:53.638 
22:11:12 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_CET=n 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:27:53.638 22:11:12 reactor_set_interrupt -- 
common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 
00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_FC=n 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 
00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:27:53.638 22:11:12 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:27:53.638 22:11:12 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:27:53.638 22:11:12 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:27:53.638 22:11:12 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:27:53.638 22:11:12 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:27:53.638 22:11:12 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:27:53.638 22:11:12 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:27:53.638 22:11:12 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:27:53.638 22:11:12 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:27:53.638 22:11:12 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:27:53.638 22:11:12 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:27:53.638 22:11:12 reactor_set_interrupt -- 
common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:27:53.638 22:11:12 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:27:53.638 22:11:12 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:27:53.638 22:11:12 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:27:53.638 22:11:12 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:27:53.638 22:11:12 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:27:53.638 #define SPDK_CONFIG_H 00:27:53.638 #define SPDK_CONFIG_APPS 1 00:27:53.638 #define SPDK_CONFIG_ARCH native 00:27:53.638 #define SPDK_CONFIG_ASAN 1 00:27:53.638 #undef SPDK_CONFIG_AVAHI 00:27:53.638 #undef SPDK_CONFIG_CET 00:27:53.638 #define SPDK_CONFIG_COVERAGE 1 00:27:53.638 #define SPDK_CONFIG_CROSS_PREFIX 00:27:53.638 #define SPDK_CONFIG_CRYPTO 1 00:27:53.638 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:27:53.638 #undef SPDK_CONFIG_CUSTOMOCF 00:27:53.638 #undef SPDK_CONFIG_DAOS 00:27:53.638 #define SPDK_CONFIG_DAOS_DIR 00:27:53.638 #define SPDK_CONFIG_DEBUG 1 00:27:53.638 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:27:53.638 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:27:53.638 #define SPDK_CONFIG_DPDK_INC_DIR 00:27:53.638 #define SPDK_CONFIG_DPDK_LIB_DIR 00:27:53.638 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:27:53.638 #undef SPDK_CONFIG_DPDK_UADK 00:27:53.638 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:27:53.638 #define SPDK_CONFIG_EXAMPLES 1 00:27:53.638 #undef SPDK_CONFIG_FC 00:27:53.638 #define SPDK_CONFIG_FC_PATH 00:27:53.638 #define SPDK_CONFIG_FIO_PLUGIN 1 00:27:53.638 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:27:53.638 #undef SPDK_CONFIG_FUSE 00:27:53.638 #undef SPDK_CONFIG_FUZZER 00:27:53.638 #define 
SPDK_CONFIG_FUZZER_LIB 00:27:53.638 #undef SPDK_CONFIG_GOLANG 00:27:53.639 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:27:53.639 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:27:53.639 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:27:53.639 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:27:53.639 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:27:53.639 #undef SPDK_CONFIG_HAVE_LIBBSD 00:27:53.639 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:27:53.639 #define SPDK_CONFIG_IDXD 1 00:27:53.639 #define SPDK_CONFIG_IDXD_KERNEL 1 00:27:53.639 #define SPDK_CONFIG_IPSEC_MB 1 00:27:53.639 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:27:53.639 #define SPDK_CONFIG_ISAL 1 00:27:53.639 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:27:53.639 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:27:53.639 #define SPDK_CONFIG_LIBDIR 00:27:53.639 #undef SPDK_CONFIG_LTO 00:27:53.639 #define SPDK_CONFIG_MAX_LCORES 128 00:27:53.639 #define SPDK_CONFIG_NVME_CUSE 1 00:27:53.639 #undef SPDK_CONFIG_OCF 00:27:53.639 #define SPDK_CONFIG_OCF_PATH 00:27:53.639 #define SPDK_CONFIG_OPENSSL_PATH 00:27:53.639 #undef SPDK_CONFIG_PGO_CAPTURE 00:27:53.639 #define SPDK_CONFIG_PGO_DIR 00:27:53.639 #undef SPDK_CONFIG_PGO_USE 00:27:53.639 #define SPDK_CONFIG_PREFIX /usr/local 00:27:53.639 #undef SPDK_CONFIG_RAID5F 00:27:53.639 #undef SPDK_CONFIG_RBD 00:27:53.639 #define SPDK_CONFIG_RDMA 1 00:27:53.639 #define SPDK_CONFIG_RDMA_PROV verbs 00:27:53.639 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:27:53.639 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:27:53.639 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:27:53.639 #define SPDK_CONFIG_SHARED 1 00:27:53.639 #undef SPDK_CONFIG_SMA 00:27:53.639 #define SPDK_CONFIG_TESTS 1 00:27:53.639 #undef SPDK_CONFIG_TSAN 00:27:53.639 #define SPDK_CONFIG_UBLK 1 00:27:53.639 #define SPDK_CONFIG_UBSAN 1 00:27:53.639 #undef SPDK_CONFIG_UNIT_TESTS 00:27:53.639 #undef SPDK_CONFIG_URING 00:27:53.639 #define SPDK_CONFIG_URING_PATH 00:27:53.639 #undef SPDK_CONFIG_URING_ZNS 
00:27:53.639 #undef SPDK_CONFIG_USDT 00:27:53.639 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:27:53.639 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:27:53.639 #undef SPDK_CONFIG_VFIO_USER 00:27:53.639 #define SPDK_CONFIG_VFIO_USER_DIR 00:27:53.639 #define SPDK_CONFIG_VHOST 1 00:27:53.639 #define SPDK_CONFIG_VIRTIO 1 00:27:53.639 #undef SPDK_CONFIG_VTUNE 00:27:53.639 #define SPDK_CONFIG_VTUNE_DIR 00:27:53.639 #define SPDK_CONFIG_WERROR 1 00:27:53.639 #define SPDK_CONFIG_WPDK_DIR 00:27:53.639 #undef SPDK_CONFIG_XNVME 00:27:53.639 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:27:53.639 22:11:12 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:27:53.639 22:11:12 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:27:53.639 22:11:12 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:53.639 22:11:12 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:53.639 22:11:12 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:53.639 22:11:12 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.639 22:11:12 reactor_set_interrupt -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.639 22:11:12 reactor_set_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.639 22:11:12 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH 00:27:53.639 22:11:12 reactor_set_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.639 22:11:12 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:27:53.639 22:11:12 reactor_set_interrupt -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:27:53.639 22:11:12 reactor_set_interrupt -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:27:53.639 22:11:12 reactor_set_interrupt -- pm/common@6 -- # 
_pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:27:53.639 22:11:13 reactor_set_interrupt -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:27:53.639 22:11:13 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:27:53.639 22:11:13 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:27:53.639 22:11:13 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:27:53.639 22:11:13 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:27:53.639 22:11:13 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:27:53.639 22:11:13 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:27:53.639 22:11:13 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:27:53.639 22:11:13 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:27:53.639 22:11:13 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:27:53.639 22:11:13 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:27:53.639 22:11:13 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:27:53.639 22:11:13 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:27:53.639 22:11:13 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:27:53.639 22:11:13 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:27:53.639 22:11:13 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:27:53.639 22:11:13 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:27:53.639 22:11:13 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:27:53.639 22:11:13 reactor_set_interrupt -- pm/common@81 -- # [[ 
............................... != QEMU ]] 00:27:53.639 22:11:13 reactor_set_interrupt -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:27:53.639 22:11:13 reactor_set_interrupt -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:27:53.639 22:11:13 reactor_set_interrupt -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:27:53.639 22:11:13 reactor_set_interrupt -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:27:53.639 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 1 00:27:53.639 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:27:53.639 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0 00:27:53.639 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:27:53.639 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0 00:27:53.639 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:27:53.639 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1 00:27:53.639 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:27:53.639 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 0 00:27:53.639 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:27:53.639 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@70 -- # : 00:27:53.639 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:27:53.639 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0 00:27:53.639 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:27:53.639 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 1 00:27:53.900 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export 
SPDK_TEST_ISAL 00:27:53.900 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0 00:27:53.900 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:27:53.900 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:27:53.901 22:11:13 reactor_set_interrupt -- 
common/autotest_common.sh@96 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0 00:27:53.901 
22:11:13 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 1 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@120 -- # : 1 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@124 -- # : 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 1 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export 
SPDK_TEST_OPAL 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : true 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@142 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@154 -- # : 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:27:53.901 22:11:13 reactor_set_interrupt -- 
common/autotest_common.sh@158 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@167 -- # : 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@169 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:27:53.901 22:11:13 reactor_set_interrupt -- 
common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:27:53.901 22:11:13 reactor_set_interrupt 
-- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:27:53.901 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:27:53.902 22:11:13 reactor_set_interrupt -- 
common/autotest_common.sh@200 -- # cat 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 
00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@263 -- # export valgrind= 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@263 -- # valgrind= 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@269 -- # uname -s 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@272 -- # [[ 1 -eq 1 ]] 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@276 -- # export HUGE_EVEN_ALLOC=yes 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@276 -- # HUGE_EVEN_ALLOC=yes 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@279 -- # MAKE=make 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 
00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@299 -- # TEST_MODE= 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@318 -- # [[ -z 1527955 ]] 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@318 -- # kill -0 1527955 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@331 -- # local mount target_dir 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.gWjSJ0 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:27:53.902 22:11:13 
reactor_set_interrupt -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.gWjSJ0/tests/interrupt /tmp/spdk.gWjSJ0 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@327 -- # df -T 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=954302464 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4330127360 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@361 -- # 
mounts["$mount"]=spdk_root 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=50630979584 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742297088 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=11111317504 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=30866337792 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871146496 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4808704 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=12338610176 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348461056 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=9850880 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@361 -- # 
mounts["$mount"]=tmpfs 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=30869966848 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871150592 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=1183744 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=6174224384 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174228480 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:27:53.902 * Looking for test storage... 
00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@368 -- # local target_space new_size 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@372 -- # mount=/ 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@374 -- # target_space=50630979584 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@381 -- # new_size=13325910016 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 
00:27:53.902 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@389 -- # return 0 00:27:53.902 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # set -o errtrace 00:27:53.903 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:27:53.903 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:27:53.903 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:27:53.903 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@1687 -- # true 00:27:53.903 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@1689 -- # xtrace_fd 00:27:53.903 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:27:53.903 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:27:53.903 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:27:53.903 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:27:53.903 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:27:53.903 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:27:53.903 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:27:53.903 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:27:53.903 22:11:13 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:27:53.903 22:11:13 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:53.903 22:11:13 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:27:53.903 22:11:13 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:27:53.903 22:11:13 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:27:53.903 22:11:13 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:27:53.903 22:11:13 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:27:53.903 22:11:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:27:53.903 22:11:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:27:53.903 22:11:13 
reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:27:53.903 22:11:13 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.903 22:11:13 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:27:53.903 22:11:13 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=1527996 00:27:53.903 22:11:13 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:53.903 22:11:13 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:27:53.903 22:11:13 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 1527996 /var/tmp/spdk.sock 00:27:53.903 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@829 -- # '[' -z 1527996 ']' 00:27:53.903 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.903 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:53.903 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:53.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:53.903 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:53.903 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:27:53.903 [2024-07-13 22:11:13.175329] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:27:53.903 [2024-07-13 22:11:13.175419] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1527996 ] 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3d:02.3 cannot 
be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:53.903 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:53.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:53.903 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:54.162 [2024-07-13 22:11:13.335715] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:54.162 [2024-07-13 22:11:13.543145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:54.162 [2024-07-13 22:11:13.543216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.162 [2024-07-13 22:11:13.543219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:54.731 [2024-07-13 22:11:13.891087] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:27:54.731 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:54.731 22:11:13 reactor_set_interrupt -- common/autotest_common.sh@862 -- # return 0 00:27:54.731 22:11:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:27:54.731 22:11:13 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:54.990 Malloc0 00:27:54.990 Malloc1 00:27:54.990 Malloc2 00:27:54.990 22:11:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:27:54.990 22:11:14 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:27:54.990 22:11:14 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:27:54.990 22:11:14 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:27:54.990 5000+0 records in 00:27:54.990 5000+0 records out 00:27:54.991 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0279173 s, 367 MB/s 00:27:54.991 22:11:14 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:27:55.250 AIO0 00:27:55.250 22:11:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 1527996 00:27:55.250 22:11:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 1527996 without_thd 00:27:55.250 22:11:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=1527996 00:27:55.250 22:11:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:27:55.250 22:11:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 
00:27:55.250 22:11:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:27:55.250 22:11:14 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:27:55.250 22:11:14 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:27:55.250 22:11:14 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:27:55.250 22:11:14 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:55.250 22:11:14 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:27:55.250 22:11:14 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:55.510 22:11:14 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:27:55.510 22:11:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:27:55.510 22:11:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:27:55.510 22:11:14 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:27:55.510 22:11:14 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:27:55.510 22:11:14 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:27:55.510 22:11:14 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:55.510 22:11:14 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:27:55.510 22:11:14 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:55.510 22:11:14 reactor_set_interrupt -- interrupt/common.sh@62 -- # 
echo '' 00:27:55.510 22:11:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:27:55.510 22:11:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:27:55.510 spdk_thread ids are 1 on reactor0. 00:27:55.510 22:11:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:27:55.510 22:11:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1527996 0 00:27:55.510 22:11:14 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1527996 0 idle 00:27:55.510 22:11:14 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1527996 00:27:55.510 22:11:14 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:27:55.510 22:11:14 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:27:55.510 22:11:14 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:27:55.510 22:11:14 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:27:55.510 22:11:14 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:27:55.510 22:11:14 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:27:55.510 22:11:14 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:27:55.510 22:11:14 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1527996 -w 256 00:27:55.510 22:11:14 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:27:55.770 22:11:14 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1527996 root 20 0 20.1t 203392 34944 S 0.0 0.3 0:00.90 reactor_0' 00:27:55.770 22:11:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1527996 root 20 0 20.1t 203392 34944 S 0.0 0.3 0:00.90 reactor_0 00:27:55.770 22:11:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:27:55.770 22:11:14 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # awk '{print $9}' 00:27:55.770 22:11:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:27:55.770 22:11:14 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:27:55.770 22:11:14 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:27:55.770 22:11:14 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:27:55.770 22:11:14 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:27:55.770 22:11:14 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:27:55.770 22:11:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:27:55.770 22:11:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1527996 1 00:27:55.770 22:11:14 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1527996 1 idle 00:27:55.770 22:11:14 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1527996 00:27:55.770 22:11:14 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:27:55.770 22:11:14 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:27:55.770 22:11:14 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:27:55.770 22:11:14 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:27:55.770 22:11:14 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:27:55.770 22:11:14 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:27:55.770 22:11:14 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:27:55.770 22:11:14 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1527996 -w 256 00:27:55.770 22:11:14 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:27:55.770 22:11:15 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1527999 root 20 0 20.1t 203392 34944 S 0.0 0.3 0:00.00 reactor_1' 
00:27:56.029 22:11:15 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1527999 root 20 0 20.1t 203392 34944 S 0.0 0.3 0:00.00 reactor_1 00:27:56.029 22:11:15 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:27:56.029 22:11:15 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:27:56.029 22:11:15 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:27:56.029 22:11:15 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:27:56.029 22:11:15 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:27:56.029 22:11:15 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:27:56.029 22:11:15 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:27:56.029 22:11:15 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:27:56.029 22:11:15 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:27:56.029 22:11:15 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1527996 2 00:27:56.029 22:11:15 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1527996 2 idle 00:27:56.029 22:11:15 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1527996 00:27:56.029 22:11:15 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:27:56.029 22:11:15 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:27:56.029 22:11:15 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:27:56.029 22:11:15 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:27:56.029 22:11:15 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:27:56.029 22:11:15 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:27:56.029 22:11:15 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:27:56.030 22:11:15 reactor_set_interrupt -- 
interrupt/common.sh@24 -- # top -bHn 1 -p 1527996 -w 256 00:27:56.030 22:11:15 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:27:56.030 22:11:15 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1528000 root 20 0 20.1t 203392 34944 S 0.0 0.3 0:00.00 reactor_2' 00:27:56.030 22:11:15 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1528000 root 20 0 20.1t 203392 34944 S 0.0 0.3 0:00.00 reactor_2 00:27:56.030 22:11:15 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:27:56.030 22:11:15 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:27:56.030 22:11:15 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:27:56.030 22:11:15 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:27:56.030 22:11:15 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:27:56.030 22:11:15 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:27:56.030 22:11:15 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:27:56.030 22:11:15 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:27:56.030 22:11:15 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:27:56.030 22:11:15 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:27:56.030 22:11:15 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:27:56.289 [2024-07-13 22:11:15.504402] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:27:56.289 22:11:15 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:27:56.289 [2024-07-13 22:11:15.676209] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:27:56.289 [2024-07-13 22:11:15.676641] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:56.549 22:11:15 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:27:56.549 [2024-07-13 22:11:15.839887] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:27:56.549 [2024-07-13 22:11:15.840088] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:56.549 22:11:15 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:27:56.549 22:11:15 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 1527996 0 00:27:56.549 22:11:15 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 1527996 0 busy 00:27:56.549 22:11:15 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1527996 00:27:56.549 22:11:15 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:27:56.549 22:11:15 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:27:56.549 22:11:15 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:27:56.549 22:11:15 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:27:56.549 22:11:15 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:27:56.549 22:11:15 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:27:56.549 22:11:15 reactor_set_interrupt 
-- interrupt/common.sh@24 -- # top -bHn 1 -p 1527996 -w 256 00:27:56.549 22:11:15 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:27:56.808 22:11:16 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1527996 root 20 0 20.1t 206080 34944 R 99.9 0.3 0:01.24 reactor_0' 00:27:56.808 22:11:16 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1527996 root 20 0 20.1t 206080 34944 R 99.9 0.3 0:01.24 reactor_0 00:27:56.808 22:11:16 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:27:56.808 22:11:16 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:27:56.808 22:11:16 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:27:56.808 22:11:16 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:27:56.808 22:11:16 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:27:56.808 22:11:16 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:27:56.808 22:11:16 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:27:56.808 22:11:16 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:27:56.808 22:11:16 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:27:56.809 22:11:16 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 1527996 2 00:27:56.809 22:11:16 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 1527996 2 busy 00:27:56.809 22:11:16 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1527996 00:27:56.809 22:11:16 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:27:56.809 22:11:16 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:27:56.809 22:11:16 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:27:56.809 22:11:16 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:27:56.809 22:11:16 
reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:27:56.809 22:11:16 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:27:56.809 22:11:16 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1527996 -w 256 00:27:56.809 22:11:16 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:27:57.069 22:11:16 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1528000 root 20 0 20.1t 206080 34944 R 99.9 0.3 0:00.35 reactor_2' 00:27:57.069 22:11:16 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1528000 root 20 0 20.1t 206080 34944 R 99.9 0.3 0:00.35 reactor_2 00:27:57.069 22:11:16 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:27:57.069 22:11:16 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:27:57.069 22:11:16 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:27:57.069 22:11:16 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:27:57.069 22:11:16 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:27:57.069 22:11:16 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:27:57.069 22:11:16 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:27:57.069 22:11:16 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:27:57.069 22:11:16 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:27:57.069 [2024-07-13 22:11:16.359896] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:27:57.069 [2024-07-13 22:11:16.360017] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:57.069 22:11:16 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:27:57.069 22:11:16 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 1527996 2 00:27:57.069 22:11:16 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1527996 2 idle 00:27:57.069 22:11:16 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1527996 00:27:57.069 22:11:16 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:27:57.069 22:11:16 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:27:57.069 22:11:16 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:27:57.069 22:11:16 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:27:57.069 22:11:16 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:27:57.069 22:11:16 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:27:57.069 22:11:16 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:27:57.069 22:11:16 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1527996 -w 256 00:27:57.069 22:11:16 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:27:57.328 22:11:16 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1528000 root 20 0 20.1t 206080 34944 S 0.0 0.3 0:00.51 reactor_2' 00:27:57.328 22:11:16 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:27:57.328 22:11:16 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1528000 root 20 0 20.1t 206080 34944 S 0.0 0.3 0:00.51 reactor_2 00:27:57.328 22:11:16 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:27:57.328 22:11:16 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:27:57.328 22:11:16 
reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:27:57.328 22:11:16 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:27:57.328 22:11:16 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:27:57.329 22:11:16 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:27:57.329 22:11:16 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:27:57.329 22:11:16 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:27:57.329 [2024-07-13 22:11:16.703891] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:27:57.329 [2024-07-13 22:11:16.704015] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:57.589 22:11:16 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:27:57.589 22:11:16 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:27:57.589 22:11:16 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:27:57.589 [2024-07-13 22:11:16.872359] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
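The trace above repeatedly runs the same verification from interrupt/common.sh's reactor_is_busy_or_idle: grab the reactor thread's CPU column from `top -bHn 1`, truncate it to an integer, and compare against a threshold. A minimal sketch of that check — the helper name and argument handling are illustrative, but the thresholds (busy means at least 70% CPU, idle means at most 30%) and the truncation of `99.9` to `99` are taken from the conditions visible in the log (`[[ 99 -lt 70 ]]`, `[[ 0 -gt 30 ]]`):

```shell
# Illustrative re-statement of the busy/idle decision seen in the trace.
check_reactor_state() {
    local cpu_rate=$1 state=$2
    cpu_rate=${cpu_rate%.*}               # drop the fractional part, as the log does
    if [[ $state == busy && $cpu_rate -lt 70 ]]; then
        return 1                          # expected busy, but CPU usage is too low
    elif [[ $state == idle && $cpu_rate -gt 30 ]]; then
        return 1                          # expected idle, but CPU usage is too high
    fi
    return 0
}
```

With the values from the trace, `check_reactor_state 99.9 busy` and `check_reactor_state 0.0 idle` both succeed, which is why each check above falls through to `return 0`.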
00:27:57.589 22:11:16 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 1527996 0 00:27:57.590 22:11:16 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1527996 0 idle 00:27:57.590 22:11:16 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1527996 00:27:57.590 22:11:16 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:27:57.590 22:11:16 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:27:57.590 22:11:16 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:27:57.590 22:11:16 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:27:57.590 22:11:16 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:27:57.590 22:11:16 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:27:57.590 22:11:16 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:27:57.590 22:11:16 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1527996 -w 256 00:27:57.590 22:11:16 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:27:57.886 22:11:17 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1527996 root 20 0 20.1t 206080 34944 S 0.0 0.3 0:01.92 reactor_0' 00:27:57.886 22:11:17 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1527996 root 20 0 20.1t 206080 34944 S 0.0 0.3 0:01.92 reactor_0 00:27:57.886 22:11:17 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:27:57.886 22:11:17 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:27:57.886 22:11:17 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:27:57.886 22:11:17 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:27:57.886 22:11:17 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:27:57.886 22:11:17 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = 
\i\d\l\e ]] 00:27:57.886 22:11:17 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:27:57.886 22:11:17 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:27:57.886 22:11:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:27:57.886 22:11:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:27:57.886 22:11:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:27:57.886 22:11:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 1527996 00:27:57.886 22:11:17 reactor_set_interrupt -- common/autotest_common.sh@948 -- # '[' -z 1527996 ']' 00:27:57.886 22:11:17 reactor_set_interrupt -- common/autotest_common.sh@952 -- # kill -0 1527996 00:27:57.886 22:11:17 reactor_set_interrupt -- common/autotest_common.sh@953 -- # uname 00:27:57.886 22:11:17 reactor_set_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:57.886 22:11:17 reactor_set_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1527996 00:27:57.886 22:11:17 reactor_set_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:57.886 22:11:17 reactor_set_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:57.886 22:11:17 reactor_set_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1527996' 00:27:57.886 killing process with pid 1527996 00:27:57.886 22:11:17 reactor_set_interrupt -- common/autotest_common.sh@967 -- # kill 1527996 00:27:57.886 22:11:17 reactor_set_interrupt -- common/autotest_common.sh@972 -- # wait 1527996 00:27:59.263 22:11:18 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:27:59.263 22:11:18 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:27:59.263 22:11:18 reactor_set_interrupt -- 
interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:27:59.263 22:11:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.263 22:11:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:27:59.263 22:11:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=1528993 00:27:59.263 22:11:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:59.263 22:11:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:27:59.263 22:11:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 1528993 /var/tmp/spdk.sock 00:27:59.263 22:11:18 reactor_set_interrupt -- common/autotest_common.sh@829 -- # '[' -z 1528993 ']' 00:27:59.263 22:11:18 reactor_set_interrupt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.263 22:11:18 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:59.263 22:11:18 reactor_set_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:59.263 22:11:18 reactor_set_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:59.263 22:11:18 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:27:59.522 [2024-07-13 22:11:18.664788] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
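The `waitforlisten 1528993 /var/tmp/spdk.sock` step above blocks until the freshly started interrupt_tgt is accepting RPCs on its UNIX domain socket. A sketch of that idea under stated assumptions: the helper name and the `-S` socket probe are guesses at the mechanism, not the script's actual code; only the retry budget of 100 mirrors the `max_retries=100` local visible in the trace.

```shell
# Hypothetical helper: poll for the RPC socket until it appears or we give up.
wait_for_rpc_sock() {
    local sock=$1 retries=${2:-100}
    while (( retries > 0 )); do
        [[ -S $sock ]] && return 0        # socket exists: target is listening
        sleep 0.1
        retries=$(( retries - 1 ))
    done
    return 1                              # gave up waiting for the target
}
```

The real script additionally verifies the PID is still alive between probes, which matters here because EAL initialization (the QAT device scan below) can take a while.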
00:27:59.522 [2024-07-13 22:11:18.664885] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1528993 ] 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3d:02.3 cannot 
be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:59.522 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.522 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:59.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.523 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:59.523 [2024-07-13 22:11:18.831222] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:59.780 [2024-07-13 22:11:19.040309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.780 [2024-07-13 22:11:19.040378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.780 [2024-07-13 22:11:19.040382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:00.038 [2024-07-13 22:11:19.369359] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
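The target above was launched with `-m 0x07` (`local cpu_mask=0x07`), and the startup notices confirm reactors on cores 0, 1 and 2 with "Total cores available: 3". A small illustrative helper, not part of the SPDK scripts, decoding such a hex cpumask into the core list it selects:

```shell
# Decode a hex CPU mask into the zero-based core indices whose bits are set.
cpumask_to_cores() {
    local mask=$(( $1 )) core=0 cores=()
    while (( mask > 0 )); do
        if (( mask & 1 )); then
            cores+=("$core")              # low bit set: this core is in the mask
        fi
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    echo "${cores[*]}"
}

cpumask_to_cores 0x07    # prints: 0 1 2
```

The same convention shows up later in the trace when reactor_get_thread_ids matches threads by per-reactor masks 0x1 and 0x4, i.e. core 0 and core 2.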
00:28:00.323 22:11:19 reactor_set_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:00.323 22:11:19 reactor_set_interrupt -- common/autotest_common.sh@862 -- # return 0 00:28:00.323 22:11:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:28:00.323 22:11:19 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:00.323 Malloc0 00:28:00.323 Malloc1 00:28:00.323 Malloc2 00:28:00.582 22:11:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:28:00.582 22:11:19 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:28:00.582 22:11:19 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:28:00.582 22:11:19 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:28:00.582 5000+0 records in 00:28:00.582 5000+0 records out 00:28:00.582 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0242166 s, 423 MB/s 00:28:00.582 22:11:19 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:28:00.582 AIO0 00:28:00.582 22:11:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 1528993 00:28:00.582 22:11:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 1528993 00:28:00.582 22:11:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=1528993 00:28:00.582 22:11:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:28:00.582 22:11:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:28:00.582 22:11:19 
reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:28:00.582 22:11:19 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:28:00.582 22:11:19 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:28:00.582 22:11:19 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:28:00.582 22:11:19 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:00.582 22:11:19 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:28:00.582 22:11:19 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:00.841 22:11:20 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:28:00.841 22:11:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:28:00.841 22:11:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:28:00.841 22:11:20 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:28:00.841 22:11:20 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:28:00.841 22:11:20 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:28:00.841 22:11:20 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:00.841 22:11:20 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:28:00.841 22:11:20 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:01.101 22:11:20 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:28:01.101 
22:11:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:28:01.101 22:11:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:28:01.101 spdk_thread ids are 1 on reactor0. 00:28:01.101 22:11:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:28:01.101 22:11:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1528993 0 00:28:01.101 22:11:20 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1528993 0 idle 00:28:01.101 22:11:20 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1528993 00:28:01.101 22:11:20 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:28:01.101 22:11:20 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:01.101 22:11:20 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:28:01.101 22:11:20 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:28:01.101 22:11:20 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:01.101 22:11:20 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:01.101 22:11:20 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:01.101 22:11:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1528993 -w 256 00:28:01.101 22:11:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:28:01.101 22:11:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1528993 root 20 0 20.1t 203392 34944 S 0.0 0.3 0:00.88 reactor_0' 00:28:01.101 22:11:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1528993 root 20 0 20.1t 203392 34944 S 0.0 0.3 0:00.88 reactor_0 00:28:01.101 22:11:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:01.101 22:11:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # 
awk '{print $9}' 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1528993 1 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1528993 1 idle 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1528993 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1528993 -w 256 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1529071 root 20 0 20.1t 203392 34944 S 0.0 0.3 0:00.00 reactor_1' 00:28:01.361 22:11:20 
reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1529071 root 20 0 20.1t 203392 34944 S 0.0 0.3 0:00.00 reactor_1 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1528993 2 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1528993 2 idle 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1528993 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:01.361 22:11:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 
1 -p 1528993 -w 256 00:28:01.362 22:11:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:28:01.621 22:11:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1529072 root 20 0 20.1t 203392 34944 S 0.0 0.3 0:00.00 reactor_2' 00:28:01.621 22:11:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1529072 root 20 0 20.1t 203392 34944 S 0.0 0.3 0:00.00 reactor_2 00:28:01.621 22:11:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:01.621 22:11:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:01.621 22:11:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:28:01.621 22:11:20 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:28:01.621 22:11:20 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:28:01.621 22:11:20 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:28:01.621 22:11:20 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:28:01.621 22:11:20 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:01.621 22:11:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:28:01.621 22:11:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:28:01.880 [2024-07-13 22:11:21.013180] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:28:01.880 [2024-07-13 22:11:21.013435] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 
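Earlier in the trace, setup_bdev_aio (interrupt/common.sh@76-77) created a 10 MB backing file with `dd` and registered it as AIO0 via `bdev_aio_create ... 2048`. The file-creation step reproduced standalone — the Jenkins workspace path is replaced with a temporary file here, but the block size and count match the trace:

```shell
# Recreate the AIO backing file: 5000 blocks of 2048 bytes of zeros.
aiofile=$(mktemp)                         # stand-in for .../spdk/test/interrupt/aiofile
dd if=/dev/zero of="$aiofile" bs=2048 count=5000 status=none
wc -c < "$aiofile"                        # 10240000 bytes: the "10 MB" the log reports
rm -f "$aiofile"
```

2048 * 5000 = 10240000 bytes, which is exactly the "10240000 bytes (10 MB, 9.8 MiB) copied" line in the dd output above.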
00:28:01.880 [2024-07-13 22:11:21.013681] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:01.880 22:11:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:28:01.880 [2024-07-13 22:11:21.193524] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:28:01.880 [2024-07-13 22:11:21.193761] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:01.880 22:11:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:28:01.880 22:11:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 1528993 0 00:28:01.880 22:11:21 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 1528993 0 busy 00:28:01.880 22:11:21 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1528993 00:28:01.880 22:11:21 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:28:01.880 22:11:21 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:28:01.880 22:11:21 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:28:01.880 22:11:21 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:01.880 22:11:21 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:01.880 22:11:21 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:01.880 22:11:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1528993 -w 256 00:28:01.880 22:11:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:28:02.139 22:11:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1528993 root 20 0 20.1t 206080 34944 R 99.9 0.3 0:01.24 reactor_0' 00:28:02.139 22:11:21 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # echo 1528993 root 20 0 20.1t 206080 34944 R 99.9 0.3 0:01.24 reactor_0
00:28:02.139 22:11:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g'
00:28:02.139 22:11:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}'
00:28:02.139 22:11:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9
00:28:02.139 22:11:21 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99
00:28:02.139 22:11:21 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]]
00:28:02.139 22:11:21 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]]
00:28:02.139 22:11:21 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]]
00:28:02.139 22:11:21 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0
00:28:02.139 22:11:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2
00:28:02.139 22:11:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 1528993 2
00:28:02.139 22:11:21 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 1528993 2 busy
00:28:02.139 22:11:21 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1528993
00:28:02.139 22:11:21 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2
00:28:02.139 22:11:21 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:28:02.139 22:11:21 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]]
00:28:02.139 22:11:21 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top
00:28:02.139 22:11:21 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 ))
00:28:02.139 22:11:21 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 ))
00:28:02.139 22:11:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1528993 -w 256
00:28:02.139 22:11:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2
00:28:02.398 22:11:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1529072 root 20 0 20.1t 206080 34944 R 99.9 0.3 0:00.35 reactor_2'
00:28:02.398 22:11:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1529072 root 20 0 20.1t 206080 34944 R 99.9 0.3 0:00.35 reactor_2
00:28:02.398 22:11:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g'
00:28:02.398 22:11:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}'
00:28:02.398 22:11:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9
00:28:02.398 22:11:21 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99
00:28:02.398 22:11:21 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]]
00:28:02.398 22:11:21 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]]
00:28:02.398 22:11:21 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]]
00:28:02.398 22:11:21 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0
00:28:02.398 22:11:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2
00:28:02.398 [2024-07-13 22:11:21.726992] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2.
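The trace above shows how the test decides a reactor is "busy": it grabs the reactor thread's line from one batch iteration of `top`, strips leading whitespace, takes the %CPU column (field 9), truncates the fraction, and compares it against fixed thresholds. A minimal sketch of that logic (not SPDK's actual `interrupt/common.sh`; the sample `top_reactor` line is copied from the log, where it would come from `top -bHn 1 -p <pid> -w 256 | grep reactor_<idx>`):

```shell
# Sample thread line, as captured in the log above; stands in for live top output.
top_reactor='1529072 root 20 0 20.1t 206080 34944 R 99.9 0.3 0:00.35 reactor_2'

# Strip leading whitespace, pull %CPU out of field 9, drop the fractional part.
cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
cpu_rate=${cpu_rate%.*}          # 99.9 -> 99, so integer comparisons work

# Same thresholds as the traced checks: < 70% fails the "busy" test,
# > 30% fails the "idle" test.
if [ "$cpu_rate" -ge 70 ]; then
  verdict=busy
elif [ "$cpu_rate" -le 30 ]; then
  verdict=idle
else
  verdict=unknown
fi
echo "$verdict"
```

With the 99.9% sample line this classifies the reactor as busy, matching the `[[ 99 -lt 70 ]]` check succeeding (i.e. not triggering) in the trace.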
00:28:02.398 [2024-07-13 22:11:21.727129] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:28:02.398 22:11:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']'
00:28:02.398 22:11:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 1528993 2
00:28:02.398 22:11:21 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1528993 2 idle
00:28:02.399 22:11:21 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1528993
00:28:02.399 22:11:21 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2
00:28:02.399 22:11:21 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:28:02.399 22:11:21 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]]
00:28:02.399 22:11:21 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]]
00:28:02.399 22:11:21 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top
00:28:02.399 22:11:21 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 ))
00:28:02.399 22:11:21 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 ))
00:28:02.399 22:11:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1528993 -w 256
00:28:02.399 22:11:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2
00:28:02.658 22:11:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1529072 root 20 0 20.1t 206080 34944 S 0.0 0.3 0:00.53 reactor_2'
00:28:02.658 22:11:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1529072 root 20 0 20.1t 206080 34944 S 0.0 0.3 0:00.53 reactor_2
00:28:02.658 22:11:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g'
00:28:02.658 22:11:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}'
00:28:02.658 22:11:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0
00:28:02.658 22:11:21 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0
00:28:02.658 22:11:21 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]]
00:28:02.658 22:11:21 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]]
00:28:02.658 22:11:21 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]]
00:28:02.658 22:11:21 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0
00:28:02.658 22:11:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0
00:28:02.918 [2024-07-13 22:11:22.083922] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0.
00:28:02.918 [2024-07-13 22:11:22.084123] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode.
00:28:02.918 [2024-07-13 22:11:22.084158] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:28:02.918 22:11:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']'
00:28:02.918 22:11:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 1528993 0
00:28:02.918 22:11:22 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1528993 0 idle
00:28:02.918 22:11:22 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1528993
00:28:02.918 22:11:22 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:28:02.918 22:11:22 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:28:02.918 22:11:22 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]]
00:28:02.918 22:11:22 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]]
00:28:02.918 22:11:22 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top
00:28:02.918 22:11:22 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 ))
00:28:02.918 22:11:22 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 ))
00:28:02.918 22:11:22 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1528993 -w 256
00:28:02.918 22:11:22 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0
00:28:02.918 22:11:22 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1528993 root 20 0 20.1t 206080 34944 S 0.0 0.3 0:01.95 reactor_0'
00:28:02.918 22:11:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1528993 root 20 0 20.1t 206080 34944 S 0.0 0.3 0:01.95 reactor_0
00:28:02.918 22:11:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g'
00:28:02.918 22:11:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}'
00:28:02.918 22:11:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0
00:28:02.918 22:11:22 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0
00:28:02.918 22:11:22 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]]
00:28:02.918 22:11:22 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]]
00:28:02.918 22:11:22 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]]
00:28:02.918 22:11:22 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0
00:28:02.918 22:11:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0
00:28:02.918 22:11:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0
00:28:02.918 22:11:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT
00:28:02.918 22:11:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 1528993
00:28:02.918 22:11:22 reactor_set_interrupt -- common/autotest_common.sh@948 -- # '[' -z 1528993 ']'
00:28:02.918 22:11:22 reactor_set_interrupt -- common/autotest_common.sh@952 -- # kill -0 1528993
00:28:02.918 22:11:22 reactor_set_interrupt -- common/autotest_common.sh@953 -- # uname
00:28:02.918 22:11:22 reactor_set_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:02.918 22:11:22 reactor_set_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1528993
00:28:03.176 22:11:22 reactor_set_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:28:03.176 22:11:22 reactor_set_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:28:03.176 22:11:22 reactor_set_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1528993'
00:28:03.176 killing process with pid 1528993
00:28:03.176 22:11:22 reactor_set_interrupt -- common/autotest_common.sh@967 -- # kill 1528993
00:28:03.176 22:11:22 reactor_set_interrupt -- common/autotest_common.sh@972 -- # wait 1528993
00:28:04.554 22:11:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup
00:28:04.554 22:11:23 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile
00:28:04.554 
00:28:04.554 real 0m10.990s
00:28:04.554 user 0m10.590s
00:28:04.554 sys 0m2.166s
00:28:04.554 22:11:23 reactor_set_interrupt -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:04.554 22:11:23 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x
00:28:04.554 ************************************
00:28:04.554 END TEST reactor_set_interrupt
00:28:04.554 ************************************
00:28:04.554 22:11:23 -- common/autotest_common.sh@1142 -- # return 0
00:28:04.554 22:11:23 -- spdk/autotest.sh@194 -- # run_test reap_unregistered_poller /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh
00:28:04.554 22:11:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:28:04.554 22:11:23 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:04.554 22:11:23 -- common/autotest_common.sh@10 -- # set +x
00:28:04.554 ************************************
00:28:04.554 START TEST reap_unregistered_poller
00:28:04.554 ************************************
00:28:04.554 22:11:23 reap_unregistered_poller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh
00:28:04.815 * Looking for test storage...
00:28:04.815 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt
00:28:04.815 22:11:23 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh
00:28:04.815 22:11:23 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh
00:28:04.815 22:11:23 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt
00:28:04.815 22:11:23 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt
00:28:04.815 22:11:23 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../..
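The `interrupt_common.sh@5`-`@6` lines traced above show the self-location idiom the test scripts use: derive `testdir` from the script's own path with `dirname` plus `readlink -f`, then derive the repo root two levels up. A minimal sketch of that idiom (a throwaway temp tree stands in for the real jenkins workspace paths, which are not assumed to exist):

```shell
# Build a disposable directory tree mirroring spdk/test/interrupt.
base=$(mktemp -d)
mkdir -p "$base/spdk/test/interrupt"
script="$base/spdk/test/interrupt/reap_unregistered_poller.sh"
: > "$script"

# Same resolution steps as interrupt_common.sh@5 and @6 in the trace.
testdir=$(readlink -f "$(dirname "$script")")   # .../spdk/test/interrupt
rootdir=$(readlink -f "$testdir/../..")         # .../spdk
echo "$rootdir"
rm -rf "$base"
```

Using `readlink -f` rather than the raw `dirname` output canonicalizes symlinks, so `rootdir` is stable no matter how the script was invoked.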
00:28:04.815 22:11:23 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:28:04.815 22:11:23 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:28:04.815 22:11:23 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:28:04.815 22:11:23 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:28:04.815 22:11:23 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:28:04.815 22:11:23 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:28:04.815 22:11:23 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:28:04.815 22:11:23 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:28:04.815 22:11:23 reap_unregistered_poller -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:28:04.815 22:11:23 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:28:04.815 22:11:23 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:28:04.815 22:11:23 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:28:04.815 22:11:23 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:28:04.815 22:11:23 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:28:04.815 22:11:23 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:28:04.815 22:11:23 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:28:04.815 22:11:23 reap_unregistered_poller -- common/build_config.sh@7 -- # 
CONFIG_PREFIX=/usr/local 00:28:04.815 22:11:23 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:28:04.815 22:11:23 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:28:04.815 22:11:23 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:28:04.815 22:11:23 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:28:04.815 22:11:23 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_CET=n 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:28:04.816 
22:11:23 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:28:04.816 22:11:23 reap_unregistered_poller -- 
common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@65 
-- # CONFIG_APPS=y 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_FC=n 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:28:04.816 22:11:23 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:28:04.816 22:11:23 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:28:04.816 22:11:23 
reap_unregistered_poller -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:28:04.816 22:11:24 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:28:04.816 22:11:24 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:28:04.816 22:11:24 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:28:04.816 22:11:24 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:28:04.816 22:11:24 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:28:04.816 22:11:24 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:28:04.816 22:11:24 reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:28:04.816 22:11:24 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:28:04.816 22:11:24 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:28:04.816 22:11:24 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:28:04.816 22:11:24 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:28:04.816 22:11:24 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:28:04.816 22:11:24 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:28:04.816 22:11:24 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef 
SPDK_CONFIG_H 00:28:04.816 #define SPDK_CONFIG_H 00:28:04.816 #define SPDK_CONFIG_APPS 1 00:28:04.816 #define SPDK_CONFIG_ARCH native 00:28:04.816 #define SPDK_CONFIG_ASAN 1 00:28:04.816 #undef SPDK_CONFIG_AVAHI 00:28:04.816 #undef SPDK_CONFIG_CET 00:28:04.816 #define SPDK_CONFIG_COVERAGE 1 00:28:04.816 #define SPDK_CONFIG_CROSS_PREFIX 00:28:04.816 #define SPDK_CONFIG_CRYPTO 1 00:28:04.816 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:28:04.816 #undef SPDK_CONFIG_CUSTOMOCF 00:28:04.816 #undef SPDK_CONFIG_DAOS 00:28:04.816 #define SPDK_CONFIG_DAOS_DIR 00:28:04.816 #define SPDK_CONFIG_DEBUG 1 00:28:04.816 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:28:04.816 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:28:04.816 #define SPDK_CONFIG_DPDK_INC_DIR 00:28:04.816 #define SPDK_CONFIG_DPDK_LIB_DIR 00:28:04.816 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:28:04.816 #undef SPDK_CONFIG_DPDK_UADK 00:28:04.816 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:28:04.816 #define SPDK_CONFIG_EXAMPLES 1 00:28:04.816 #undef SPDK_CONFIG_FC 00:28:04.816 #define SPDK_CONFIG_FC_PATH 00:28:04.816 #define SPDK_CONFIG_FIO_PLUGIN 1 00:28:04.816 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:28:04.816 #undef SPDK_CONFIG_FUSE 00:28:04.816 #undef SPDK_CONFIG_FUZZER 00:28:04.816 #define SPDK_CONFIG_FUZZER_LIB 00:28:04.816 #undef SPDK_CONFIG_GOLANG 00:28:04.816 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:28:04.816 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:28:04.816 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:28:04.816 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:28:04.816 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:28:04.816 #undef SPDK_CONFIG_HAVE_LIBBSD 00:28:04.816 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:28:04.816 #define SPDK_CONFIG_IDXD 1 00:28:04.816 #define SPDK_CONFIG_IDXD_KERNEL 1 00:28:04.816 #define SPDK_CONFIG_IPSEC_MB 1 00:28:04.816 #define SPDK_CONFIG_IPSEC_MB_DIR 
/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:28:04.816 #define SPDK_CONFIG_ISAL 1 00:28:04.816 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:28:04.816 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:28:04.816 #define SPDK_CONFIG_LIBDIR 00:28:04.816 #undef SPDK_CONFIG_LTO 00:28:04.816 #define SPDK_CONFIG_MAX_LCORES 128 00:28:04.816 #define SPDK_CONFIG_NVME_CUSE 1 00:28:04.816 #undef SPDK_CONFIG_OCF 00:28:04.816 #define SPDK_CONFIG_OCF_PATH 00:28:04.816 #define SPDK_CONFIG_OPENSSL_PATH 00:28:04.816 #undef SPDK_CONFIG_PGO_CAPTURE 00:28:04.816 #define SPDK_CONFIG_PGO_DIR 00:28:04.816 #undef SPDK_CONFIG_PGO_USE 00:28:04.816 #define SPDK_CONFIG_PREFIX /usr/local 00:28:04.816 #undef SPDK_CONFIG_RAID5F 00:28:04.816 #undef SPDK_CONFIG_RBD 00:28:04.816 #define SPDK_CONFIG_RDMA 1 00:28:04.816 #define SPDK_CONFIG_RDMA_PROV verbs 00:28:04.816 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:28:04.816 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:28:04.816 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:28:04.816 #define SPDK_CONFIG_SHARED 1 00:28:04.816 #undef SPDK_CONFIG_SMA 00:28:04.816 #define SPDK_CONFIG_TESTS 1 00:28:04.817 #undef SPDK_CONFIG_TSAN 00:28:04.817 #define SPDK_CONFIG_UBLK 1 00:28:04.817 #define SPDK_CONFIG_UBSAN 1 00:28:04.817 #undef SPDK_CONFIG_UNIT_TESTS 00:28:04.817 #undef SPDK_CONFIG_URING 00:28:04.817 #define SPDK_CONFIG_URING_PATH 00:28:04.817 #undef SPDK_CONFIG_URING_ZNS 00:28:04.817 #undef SPDK_CONFIG_USDT 00:28:04.817 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:28:04.817 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:28:04.817 #undef SPDK_CONFIG_VFIO_USER 00:28:04.817 #define SPDK_CONFIG_VFIO_USER_DIR 00:28:04.817 #define SPDK_CONFIG_VHOST 1 00:28:04.817 #define SPDK_CONFIG_VIRTIO 1 00:28:04.817 #undef SPDK_CONFIG_VTUNE 00:28:04.817 #define SPDK_CONFIG_VTUNE_DIR 00:28:04.817 #define SPDK_CONFIG_WERROR 1 00:28:04.817 #define SPDK_CONFIG_WPDK_DIR 00:28:04.817 #undef SPDK_CONFIG_XNVME 00:28:04.817 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ 
\S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:28:04.817 22:11:24 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:28:04.817 22:11:24 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:04.817 22:11:24 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:04.817 22:11:24 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:04.817 22:11:24 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.817 22:11:24 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.817 22:11:24 reap_unregistered_poller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.817 22:11:24 reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:28:04.817 22:11:24 reap_unregistered_poller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:28:04.817 22:11:24 reap_unregistered_poller -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:28:04.817 22:11:24 reap_unregistered_poller -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:28:04.817 22:11:24 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:28:04.817 22:11:24 reap_unregistered_poller -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:28:04.817 22:11:24 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:28:04.817 22:11:24 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:28:04.817 22:11:24 reap_unregistered_poller -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:28:04.817 22:11:24 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:28:04.817 22:11:24 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:28:04.817 22:11:24 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:28:04.817 22:11:24 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:28:04.817 22:11:24 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:28:04.817 22:11:24 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:28:04.817 22:11:24 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:28:04.817 22:11:24 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:28:04.817 22:11:24 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:28:04.817 22:11:24 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:28:04.817 22:11:24 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:28:04.817 22:11:24 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:28:04.817 22:11:24 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:28:04.817 22:11:24 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:28:04.817 22:11:24 reap_unregistered_poller -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:28:04.817 22:11:24 reap_unregistered_poller -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:28:04.817 22:11:24 reap_unregistered_poller -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:28:04.817 22:11:24 reap_unregistered_poller -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:28:04.817 22:11:24 reap_unregistered_poller -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 1 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 0 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@70 -- # : 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 1 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0 00:28:04.817 22:11:24 reap_unregistered_poller -- 
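The pm/common portion of the trace above builds its monitor list from an associative array recording which collectors need sudo, then appends the hardware collectors only when the environment allows them (not QEMU, not a container). A minimal sketch of that pattern, with the QEMU product-name check simplified to the checks visible in the trace:

```shell
#!/usr/bin/env bash
# Map each collector to whether it must run under sudo (1) or not (0),
# mirroring MONITOR_RESOURCES_SUDO from pm/common.
declare -A MONITOR_RESOURCES_SUDO=(
    [collect-bmc-pm]=1
    [collect-cpu-load]=0
    [collect-cpu-temp]=0
    [collect-vmstat]=0
)
# Index 0 -> no prefix, index 1 -> 'sudo -E', so a launcher can prefix each
# collector with ${SUDO[${MONITOR_RESOURCES_SUDO[$collector]}]}.
SUDO[0]=
SUDO[1]='sudo -E'

# Start with the monitors that are always safe, then add the hardware
# monitors only on bare-metal Linux (not inside a container).
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
if [[ $(uname -s) == Linux && ! -e /.dockerenv ]]; then
    MONITOR_RESOURCES+=(collect-cpu-temp)
    MONITOR_RESOURCES+=(collect-bmc-pm)
fi
printf '%s\n' "${MONITOR_RESOURCES[@]}"
```

The two-element `SUDO` array keeps the call site branch-free: the per-collector flag doubles as the index that selects the privilege prefix.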
common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 0 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0 00:28:04.817 22:11:24 reap_unregistered_poller -- 
common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 1 00:28:04.817 22:11:24 reap_unregistered_poller -- 
common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 1 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@126 -- # : 0 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:28:04.817 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 1 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : 00:28:04.818 22:11:24 reap_unregistered_poller -- 
common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : true 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : 0 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@150 -- # : 0 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@154 -- # : 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 0 00:28:04.818 22:11:24 reap_unregistered_poller -- 
common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 0 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@167 -- # : 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@169 -- # : 0 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@171 -- # : 0 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@177 -- # 
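The long run of `: 0` / `export SPDK_TEST_*` pairs above is the standard bash default-parameter idiom: the script runs `: "${VAR:=default}"`, which assigns only when the variable is unset or empty, so a CI job can pre-seed any flag; xtrace prints the no-op `:` with its already-expanded argument, which is why the log shows bare `: 0`, `: 1`, or `: rdma` lines. A self-contained sketch using three flags and defaults taken from the trace:

```shell
#!/usr/bin/env bash
# Start from a clean slate so the demo is deterministic.
unset SPDK_TEST_NVME SPDK_TEST_CRYPTO SPDK_TEST_NVMF_TRANSPORT

# ':' is a no-op command; the ${VAR:=default} expansion inside it assigns
# VAR only if it is currently unset or empty. export then publishes it to
# child processes (the actual test binaries).
: "${SPDK_TEST_NVME:=0}"
export SPDK_TEST_NVME
: "${SPDK_TEST_CRYPTO:=1}"
export SPDK_TEST_CRYPTO
: "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"
export SPDK_TEST_NVMF_TRANSPORT

echo "$SPDK_TEST_NVME $SPDK_TEST_CRYPTO $SPDK_TEST_NVMF_TRANSPORT"
```

Because `:=` only fires on unset/empty variables, running the same script with `SPDK_TEST_NVME=1` in the environment would keep the caller's value.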
export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:28:04.818 22:11:24 reap_unregistered_poller -- 
common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:28:04.818 22:11:24 
reap_unregistered_poller -- common/autotest_common.sh@200 -- # cat 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@254 -- # export 
VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@263 -- # export valgrind= 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@263 -- # valgrind= 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@269 -- # uname -s 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@272 -- # [[ 1 -eq 1 ]] 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@276 -- # export HUGE_EVEN_ALLOC=yes 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@276 -- # HUGE_EVEN_ALLOC=yes 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@279 -- 
# MAKE=make 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@299 -- # TEST_MODE= 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@318 -- # [[ -z 1530024 ]] 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@318 -- # kill -0 1530024 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:28:04.818 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@331 -- # local mount target_dir 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.USikxc 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:28:04.819 22:11:24 reap_unregistered_poller -- 
common/autotest_common.sh@345 -- # [[ -n '' ]] 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.USikxc/tests/interrupt /tmp/spdk.USikxc 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@327 -- # df -T 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=954302464 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@363 -- # 
uses["$mount"]=4330127360 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=50630787072 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742297088 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=11111510016 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=30866337792 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871146496 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4808704 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=12338610176 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348461056 00:28:04.819 22:11:24 reap_unregistered_poller -- 
common/autotest_common.sh@363 -- # uses["$mount"]=9850880 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=30869966848 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871150592 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=1183744 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=6174224384 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174228480 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:28:04.819 * Looking for test storage... 
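`set_test_storage` above walks `df -T` output with `read -r source fs size use avail _ mount`, filling one associative array per column keyed by mount point, so the later candidate loop can compare available space against the requested size. A minimal sketch of that parsing loop (the 2 GiB request mirrors the trace; the 1K-block scaling is an assumption, as the script may ask df for byte units):

```shell
#!/usr/bin/env bash
# Parse 'df -T' the way set_test_storage does: one associative array per
# column, keyed by mount point.
declare -A mounts fss sizes avails uses
requested_size=$((2 * 1024 * 1024 * 1024))   # 2 GiB, as in the trace

# df -T columns: Filesystem Type 1K-blocks Used Available Use% Mounted-on;
# the '_' variable swallows the Use% field we don't need.
while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$((size * 1024))
    avails["$mount"]=$((avail * 1024))
    uses["$mount"]=$((use * 1024))
done < <(df -T | grep -v Filesystem)

# Report whether the root mount could hold the requested test storage.
if ((avails[/] >= requested_size)); then
    echo "/ has enough space"
else
    echo "/ is too small"
fi
```

Note that bash does not arithmetically evaluate associative-array subscripts, which is why expressions like `sizes[/]` in the trace's `(( ... ))` tests are legal even though `/` would be a syntax error in an ordinary arithmetic context.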
00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@368 -- # local target_space new_size 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@372 -- # mount=/ 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@374 -- # target_space=50630787072 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@381 -- # new_size=13326102528 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:28:04.819 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@389 -- # return 0 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # set -o errtrace 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@1687 -- # true 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@1689 -- # xtrace_fd 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:28:04.819 22:11:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:28:04.819 22:11:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:04.819 22:11:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:28:04.819 22:11:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:28:04.819 22:11:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:28:04.819 22:11:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:28:04.819 22:11:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:28:04.819 22:11:24 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:28:04.819 22:11:24 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # 
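After locating test storage, the harness enables `errtrace`/`extdebug`, installs a backtrace trap on ERR, and points `PS4` at a timestamped template; PS4 is expanded like PS1, so the `\t` escape is what puts the wall-clock time at the front of every traced line in this log. A compact sketch of that setup, with `print_backtrace` reduced to a stub (the real one in autotest_common.sh prints arguments and source snippets as well):

```shell
#!/usr/bin/env bash
# Simplified stub; the real print_backtrace also dumps args via BASH_ARGV.
print_backtrace() {
    local i
    for ((i = 1; i < ${#FUNCNAME[@]}; i++)); do
        echo "  at ${FUNCNAME[$i]} (${BASH_SOURCE[$i]}:${BASH_LINENO[$((i - 1))]})"
    done
}

set -o errtrace     # let the ERR trap fire inside functions and subshells
shopt -s extdebug   # expose richer per-frame data for the backtrace
trap 'trap - ERR; print_backtrace >&2' ERR

# \t expands to the current time; the parameter expansions strip the
# workspace prefix from BASH_SOURCE, giving the 'file@line' seen in the log.
PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
set -x
true    # this command now echoes on stderr with the timestamped prefix
set +x
echo done
```

The `trap - ERR` inside the trap body removes the handler before printing, so a failure inside `print_backtrace` itself cannot recurse.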
PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:28:04.819 22:11:24 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:28:04.819 22:11:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.819 22:11:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:28:04.819 22:11:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=1530065 00:28:04.819 22:11:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:28:04.819 22:11:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:04.819 22:11:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 1530065 /var/tmp/spdk.sock 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@829 -- # '[' -z 1530065 ']' 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.819 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:04.820 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:04.820 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:04.820 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:28:04.820 [2024-07-13 22:11:24.160328] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:28:04.820 [2024-07-13 22:11:24.160421] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1530065 ] 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3f:01.7 
cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:05.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:05.079 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:05.079 [2024-07-13 22:11:24.321027] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:05.338 [2024-07-13 22:11:24.531262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:05.338 [2024-07-13 22:11:24.531342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.338 [2024-07-13 22:11:24.531343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:05.597 [2024-07-13 22:11:24.865439] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:28:05.597 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:05.597 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@862 -- # return 0 00:28:05.597 22:11:24 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:28:05.597 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.597 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:28:05.597 22:11:24 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:28:05.597 22:11:24 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.597 22:11:24 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:28:05.597 "name": "app_thread", 00:28:05.597 "id": 1, 00:28:05.597 "active_pollers": [], 00:28:05.597 "timed_pollers": [ 00:28:05.597 { 00:28:05.597 "name": "rpc_subsystem_poll_servers", 00:28:05.597 "id": 1, 00:28:05.598 "state": "waiting", 00:28:05.598 "run_count": 0, 00:28:05.598 "busy_count": 0, 00:28:05.598 "period_ticks": 10000000 00:28:05.598 } 00:28:05.598 ], 00:28:05.598 "paused_pollers": [] 00:28:05.598 }' 00:28:05.598 22:11:24 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:28:05.598 22:11:24 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:28:05.598 22:11:24 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:28:05.598 22:11:24 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:28:05.857 22:11:25 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:28:05.857 22:11:25 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:28:05.857 
22:11:25 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:28:05.857 22:11:25 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:28:05.857 22:11:25 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:28:05.857 5000+0 records in 00:28:05.857 5000+0 records out 00:28:05.857 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0254151 s, 403 MB/s 00:28:05.857 22:11:25 reap_unregistered_poller -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:28:06.116 AIO0 00:28:06.116 22:11:25 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:06.116 22:11:25 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:28:06.376 22:11:25 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:28:06.376 22:11:25 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.376 22:11:25 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:28:06.376 22:11:25 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:28:06.376 22:11:25 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.376 22:11:25 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:28:06.376 "name": "app_thread", 00:28:06.376 "id": 1, 00:28:06.376 "active_pollers": [], 00:28:06.376 "timed_pollers": [ 00:28:06.376 { 00:28:06.376 "name": "rpc_subsystem_poll_servers", 00:28:06.376 "id": 1, 00:28:06.376 "state": "waiting", 00:28:06.376 "run_count": 0, 00:28:06.376 "busy_count": 0, 
00:28:06.376 "period_ticks": 10000000 00:28:06.376 } 00:28:06.376 ], 00:28:06.376 "paused_pollers": [] 00:28:06.376 }' 00:28:06.376 22:11:25 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:28:06.376 22:11:25 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:28:06.376 22:11:25 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:28:06.376 22:11:25 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:28:06.376 22:11:25 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:28:06.376 22:11:25 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:28:06.376 22:11:25 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:28:06.376 22:11:25 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 1530065 00:28:06.376 22:11:25 reap_unregistered_poller -- common/autotest_common.sh@948 -- # '[' -z 1530065 ']' 00:28:06.376 22:11:25 reap_unregistered_poller -- common/autotest_common.sh@952 -- # kill -0 1530065 00:28:06.376 22:11:25 reap_unregistered_poller -- common/autotest_common.sh@953 -- # uname 00:28:06.376 22:11:25 reap_unregistered_poller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:06.376 22:11:25 reap_unregistered_poller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1530065 00:28:06.376 22:11:25 reap_unregistered_poller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:06.376 22:11:25 reap_unregistered_poller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:06.376 22:11:25 reap_unregistered_poller -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 1530065' 00:28:06.376 killing process with pid 1530065 00:28:06.376 22:11:25 reap_unregistered_poller -- common/autotest_common.sh@967 -- # kill 1530065 00:28:06.376 22:11:25 reap_unregistered_poller -- common/autotest_common.sh@972 -- # wait 1530065 00:28:07.755 22:11:26 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:28:07.755 22:11:26 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:28:07.755 00:28:07.755 real 0m3.001s 00:28:07.755 user 0m2.485s 00:28:07.755 sys 0m0.637s 00:28:07.755 22:11:26 reap_unregistered_poller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:07.755 22:11:26 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:28:07.755 ************************************ 00:28:07.755 END TEST reap_unregistered_poller 00:28:07.755 ************************************ 00:28:07.755 22:11:26 -- common/autotest_common.sh@1142 -- # return 0 00:28:07.755 22:11:26 -- spdk/autotest.sh@198 -- # uname -s 00:28:07.755 22:11:26 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:28:07.755 22:11:26 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:28:07.755 22:11:26 -- spdk/autotest.sh@205 -- # [[ 1 -eq 0 ]] 00:28:07.755 22:11:26 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:28:07.755 22:11:26 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:28:07.755 22:11:26 -- spdk/autotest.sh@260 -- # timing_exit lib 00:28:07.755 22:11:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:07.755 22:11:26 -- common/autotest_common.sh@10 -- # set +x 00:28:07.755 22:11:26 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:28:07.755 22:11:26 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:28:07.755 22:11:26 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:28:07.755 22:11:26 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:28:07.755 22:11:26 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:28:07.755 
22:11:26 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:28:07.755 22:11:26 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:28:07.755 22:11:26 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:28:07.755 22:11:26 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:28:07.755 22:11:26 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:28:07.755 22:11:26 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:28:07.755 22:11:26 -- spdk/autotest.sh@347 -- # '[' 1 -eq 1 ']' 00:28:07.755 22:11:26 -- spdk/autotest.sh@348 -- # run_test compress_compdev /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:28:07.755 22:11:26 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:07.755 22:11:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:07.755 22:11:26 -- common/autotest_common.sh@10 -- # set +x 00:28:07.755 ************************************ 00:28:07.755 START TEST compress_compdev 00:28:07.755 ************************************ 00:28:07.755 22:11:27 compress_compdev -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:28:07.755 * Looking for test storage... 
00:28:07.755 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:28:07.755 22:11:27 compress_compdev -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:28:07.755 22:11:27 compress_compdev -- nvmf/common.sh@7 -- # uname -s 00:28:08.016 22:11:27 compress_compdev -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:08.016 22:11:27 compress_compdev -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:08.016 22:11:27 compress_compdev -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:08.016 22:11:27 compress_compdev -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:08.016 22:11:27 compress_compdev -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:08.016 22:11:27 compress_compdev -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:08.016 22:11:27 compress_compdev -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:08.016 22:11:27 compress_compdev -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:08.016 22:11:27 compress_compdev -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:08.016 22:11:27 compress_compdev -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:08.016 22:11:27 compress_compdev -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:28:08.016 22:11:27 compress_compdev -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:28:08.016 22:11:27 compress_compdev -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:08.016 22:11:27 compress_compdev -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:08.016 22:11:27 compress_compdev -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:28:08.016 22:11:27 compress_compdev -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:08.016 22:11:27 compress_compdev -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:28:08.016 22:11:27 compress_compdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:08.016 22:11:27 compress_compdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:08.016 22:11:27 compress_compdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:08.016 22:11:27 compress_compdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.016 22:11:27 compress_compdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.016 22:11:27 compress_compdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.016 22:11:27 compress_compdev -- 
paths/export.sh@5 -- # export PATH 00:28:08.016 22:11:27 compress_compdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.016 22:11:27 compress_compdev -- nvmf/common.sh@47 -- # : 0 00:28:08.016 22:11:27 compress_compdev -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:08.016 22:11:27 compress_compdev -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:08.016 22:11:27 compress_compdev -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:08.016 22:11:27 compress_compdev -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:08.016 22:11:27 compress_compdev -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:08.016 22:11:27 compress_compdev -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:08.016 22:11:27 compress_compdev -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:08.016 22:11:27 compress_compdev -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:08.016 22:11:27 compress_compdev -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:08.016 22:11:27 compress_compdev -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:28:08.016 22:11:27 compress_compdev -- compress/compress.sh@82 -- # test_type=compdev 00:28:08.016 22:11:27 compress_compdev -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:28:08.016 22:11:27 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:28:08.016 22:11:27 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=1530688 00:28:08.016 22:11:27 compress_compdev -- 
compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:08.016 22:11:27 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 1530688 00:28:08.016 22:11:27 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:28:08.016 22:11:27 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 1530688 ']' 00:28:08.016 22:11:27 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.016 22:11:27 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:08.016 22:11:27 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:08.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:08.017 22:11:27 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:08.017 22:11:27 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:28:08.017 [2024-07-13 22:11:27.266344] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:28:08.017 [2024-07-13 22:11:27.266443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1530688 ] 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3d:02.3 cannot be used 
00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:08.017 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:08.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:08.017 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:08.277 [2024-07-13 22:11:27.425049] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:08.277 [2024-07-13 22:11:27.637676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:08.277 [2024-07-13 22:11:27.637681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:09.657 [2024-07-13 22:11:28.645404] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:28:09.657 22:11:28 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:09.657 22:11:28 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:28:09.657 22:11:28 compress_compdev -- compress/compress.sh@74 -- # create_vols 00:28:09.657 22:11:28 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:09.657 22:11:28 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:12.958 [2024-07-13 22:11:31.866772] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d220 PMD being used: compress_qat 00:28:12.958 22:11:31 
compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:28:12.958 22:11:31 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:28:12.958 22:11:31 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:12.958 22:11:31 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:28:12.958 22:11:31 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:12.958 22:11:31 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:12.958 22:11:31 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:12.958 22:11:32 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:28:12.958 [ 00:28:12.958 { 00:28:12.958 "name": "Nvme0n1", 00:28:12.958 "aliases": [ 00:28:12.958 "c5535a0d-cebf-4002-96b2-23269a3ccd92" 00:28:12.958 ], 00:28:12.958 "product_name": "NVMe disk", 00:28:12.958 "block_size": 512, 00:28:12.958 "num_blocks": 3907029168, 00:28:12.958 "uuid": "c5535a0d-cebf-4002-96b2-23269a3ccd92", 00:28:12.958 "assigned_rate_limits": { 00:28:12.958 "rw_ios_per_sec": 0, 00:28:12.958 "rw_mbytes_per_sec": 0, 00:28:12.958 "r_mbytes_per_sec": 0, 00:28:12.958 "w_mbytes_per_sec": 0 00:28:12.958 }, 00:28:12.958 "claimed": false, 00:28:12.958 "zoned": false, 00:28:12.958 "supported_io_types": { 00:28:12.958 "read": true, 00:28:12.958 "write": true, 00:28:12.958 "unmap": true, 00:28:12.958 "flush": true, 00:28:12.958 "reset": true, 00:28:12.958 "nvme_admin": true, 00:28:12.958 "nvme_io": true, 00:28:12.958 "nvme_io_md": false, 00:28:12.958 "write_zeroes": true, 00:28:12.958 "zcopy": false, 00:28:12.958 "get_zone_info": false, 00:28:12.958 "zone_management": false, 00:28:12.958 "zone_append": false, 00:28:12.958 "compare": false, 00:28:12.958 "compare_and_write": false, 00:28:12.958 
"abort": true, 00:28:12.958 "seek_hole": false, 00:28:12.958 "seek_data": false, 00:28:12.958 "copy": false, 00:28:12.958 "nvme_iov_md": false 00:28:12.958 }, 00:28:12.958 "driver_specific": { 00:28:12.958 "nvme": [ 00:28:12.958 { 00:28:12.958 "pci_address": "0000:d8:00.0", 00:28:12.958 "trid": { 00:28:12.958 "trtype": "PCIe", 00:28:12.958 "traddr": "0000:d8:00.0" 00:28:12.958 }, 00:28:12.958 "ctrlr_data": { 00:28:12.958 "cntlid": 0, 00:28:12.958 "vendor_id": "0x8086", 00:28:12.958 "model_number": "INTEL SSDPE2KX020T8", 00:28:12.958 "serial_number": "BTLJ125505KA2P0BGN", 00:28:12.958 "firmware_revision": "VDV10170", 00:28:12.958 "oacs": { 00:28:12.958 "security": 0, 00:28:12.958 "format": 1, 00:28:12.958 "firmware": 1, 00:28:12.958 "ns_manage": 1 00:28:12.958 }, 00:28:12.958 "multi_ctrlr": false, 00:28:12.958 "ana_reporting": false 00:28:12.958 }, 00:28:12.958 "vs": { 00:28:12.958 "nvme_version": "1.2" 00:28:12.958 }, 00:28:12.958 "ns_data": { 00:28:12.958 "id": 1, 00:28:12.958 "can_share": false 00:28:12.958 } 00:28:12.958 } 00:28:12.958 ], 00:28:12.958 "mp_policy": "active_passive" 00:28:12.958 } 00:28:12.958 } 00:28:12.958 ] 00:28:12.958 22:11:32 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:28:12.958 22:11:32 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:28:13.217 [2024-07-13 22:11:32.413940] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d3e0 PMD being used: compress_qat 00:28:14.153 dea7f959-ccc8-499b-b8fa-ba01e01b6624 00:28:14.153 22:11:33 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:28:14.427 94c7e9a6-ac5c-43d5-8784-1bb5b7d5dccd 00:28:14.427 22:11:33 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:28:14.427 22:11:33 compress_compdev -- 
common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:28:14.427 22:11:33 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:14.427 22:11:33 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:28:14.427 22:11:33 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:14.427 22:11:33 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:14.427 22:11:33 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:14.708 22:11:33 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:28:14.708 [ 00:28:14.708 { 00:28:14.708 "name": "94c7e9a6-ac5c-43d5-8784-1bb5b7d5dccd", 00:28:14.708 "aliases": [ 00:28:14.708 "lvs0/lv0" 00:28:14.708 ], 00:28:14.708 "product_name": "Logical Volume", 00:28:14.708 "block_size": 512, 00:28:14.708 "num_blocks": 204800, 00:28:14.708 "uuid": "94c7e9a6-ac5c-43d5-8784-1bb5b7d5dccd", 00:28:14.708 "assigned_rate_limits": { 00:28:14.708 "rw_ios_per_sec": 0, 00:28:14.708 "rw_mbytes_per_sec": 0, 00:28:14.708 "r_mbytes_per_sec": 0, 00:28:14.708 "w_mbytes_per_sec": 0 00:28:14.708 }, 00:28:14.708 "claimed": false, 00:28:14.708 "zoned": false, 00:28:14.708 "supported_io_types": { 00:28:14.708 "read": true, 00:28:14.708 "write": true, 00:28:14.708 "unmap": true, 00:28:14.708 "flush": false, 00:28:14.708 "reset": true, 00:28:14.708 "nvme_admin": false, 00:28:14.708 "nvme_io": false, 00:28:14.708 "nvme_io_md": false, 00:28:14.708 "write_zeroes": true, 00:28:14.708 "zcopy": false, 00:28:14.708 "get_zone_info": false, 00:28:14.708 "zone_management": false, 00:28:14.708 "zone_append": false, 00:28:14.708 "compare": false, 00:28:14.708 "compare_and_write": false, 00:28:14.708 "abort": false, 00:28:14.708 "seek_hole": true, 00:28:14.708 "seek_data": true, 00:28:14.708 "copy": false, 
00:28:14.708 "nvme_iov_md": false 00:28:14.708 }, 00:28:14.708 "driver_specific": { 00:28:14.708 "lvol": { 00:28:14.708 "lvol_store_uuid": "dea7f959-ccc8-499b-b8fa-ba01e01b6624", 00:28:14.708 "base_bdev": "Nvme0n1", 00:28:14.708 "thin_provision": true, 00:28:14.708 "num_allocated_clusters": 0, 00:28:14.708 "snapshot": false, 00:28:14.708 "clone": false, 00:28:14.708 "esnap_clone": false 00:28:14.708 } 00:28:14.708 } 00:28:14.708 } 00:28:14.708 ] 00:28:14.708 22:11:33 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:28:14.708 22:11:34 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:28:14.708 22:11:34 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:28:14.966 [2024-07-13 22:11:34.169808] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:28:14.966 COMP_lvs0/lv0 00:28:14.966 22:11:34 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:28:14.966 22:11:34 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:28:14.966 22:11:34 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:14.966 22:11:34 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:28:14.966 22:11:34 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:14.966 22:11:34 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:14.966 22:11:34 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:15.225 22:11:34 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:28:15.225 [ 00:28:15.225 { 00:28:15.225 "name": "COMP_lvs0/lv0", 00:28:15.225 "aliases": [ 00:28:15.225 
"c0515d98-021b-5e49-a40f-8fec9be59deb" 00:28:15.225 ], 00:28:15.225 "product_name": "compress", 00:28:15.225 "block_size": 512, 00:28:15.225 "num_blocks": 200704, 00:28:15.225 "uuid": "c0515d98-021b-5e49-a40f-8fec9be59deb", 00:28:15.225 "assigned_rate_limits": { 00:28:15.225 "rw_ios_per_sec": 0, 00:28:15.225 "rw_mbytes_per_sec": 0, 00:28:15.225 "r_mbytes_per_sec": 0, 00:28:15.225 "w_mbytes_per_sec": 0 00:28:15.225 }, 00:28:15.225 "claimed": false, 00:28:15.225 "zoned": false, 00:28:15.225 "supported_io_types": { 00:28:15.225 "read": true, 00:28:15.225 "write": true, 00:28:15.225 "unmap": false, 00:28:15.225 "flush": false, 00:28:15.225 "reset": false, 00:28:15.225 "nvme_admin": false, 00:28:15.225 "nvme_io": false, 00:28:15.225 "nvme_io_md": false, 00:28:15.225 "write_zeroes": true, 00:28:15.225 "zcopy": false, 00:28:15.225 "get_zone_info": false, 00:28:15.225 "zone_management": false, 00:28:15.225 "zone_append": false, 00:28:15.225 "compare": false, 00:28:15.225 "compare_and_write": false, 00:28:15.225 "abort": false, 00:28:15.225 "seek_hole": false, 00:28:15.225 "seek_data": false, 00:28:15.225 "copy": false, 00:28:15.225 "nvme_iov_md": false 00:28:15.225 }, 00:28:15.225 "driver_specific": { 00:28:15.225 "compress": { 00:28:15.225 "name": "COMP_lvs0/lv0", 00:28:15.225 "base_bdev_name": "94c7e9a6-ac5c-43d5-8784-1bb5b7d5dccd" 00:28:15.225 } 00:28:15.225 } 00:28:15.225 } 00:28:15.225 ] 00:28:15.225 22:11:34 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:28:15.225 22:11:34 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:28:15.484 [2024-07-13 22:11:34.617254] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e0000101e0 PMD being used: compress_qat 00:28:15.484 [2024-07-13 22:11:34.620020] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d5a0 PMD being used: compress_qat 00:28:15.484 Running I/O for 3 seconds... 
00:28:18.775 00:28:18.775 Latency(us) 00:28:18.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:18.775 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:28:18.775 Verification LBA range: start 0x0 length 0x3100 00:28:18.775 COMP_lvs0/lv0 : 3.01 3970.86 15.51 0.00 0.00 8003.61 134.35 13002.34 00:28:18.775 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:28:18.775 Verification LBA range: start 0x3100 length 0x3100 00:28:18.775 COMP_lvs0/lv0 : 3.01 4082.28 15.95 0.00 0.00 7799.73 125.34 13002.34 00:28:18.775 =================================================================================================================== 00:28:18.775 Total : 8053.14 31.46 0.00 0.00 7900.35 125.34 13002.34 00:28:18.775 0 00:28:18.775 22:11:37 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:28:18.775 22:11:37 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:28:18.775 22:11:37 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:28:18.775 22:11:38 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:18.775 22:11:38 compress_compdev -- compress/compress.sh@78 -- # killprocess 1530688 00:28:18.775 22:11:38 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 1530688 ']' 00:28:18.775 22:11:38 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 1530688 00:28:18.775 22:11:38 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:28:18.775 22:11:38 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:18.775 22:11:38 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1530688 00:28:18.775 22:11:38 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:28:18.775 22:11:38 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:18.775 22:11:38 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1530688' 00:28:18.775 killing process with pid 1530688 00:28:18.775 22:11:38 compress_compdev -- common/autotest_common.sh@967 -- # kill 1530688 00:28:18.775 Received shutdown signal, test time was about 3.000000 seconds 00:28:18.775 00:28:18.775 Latency(us) 00:28:18.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:18.775 =================================================================================================================== 00:28:18.775 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:18.775 22:11:38 compress_compdev -- common/autotest_common.sh@972 -- # wait 1530688 00:28:22.064 22:11:41 compress_compdev -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:28:22.064 22:11:41 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:28:22.064 22:11:41 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=1533014 00:28:22.064 22:11:41 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:22.064 22:11:41 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:28:22.064 22:11:41 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 1533014 00:28:22.064 22:11:41 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 1533014 ']' 00:28:22.064 22:11:41 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.064 22:11:41 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:22.064 22:11:41 compress_compdev -- common/autotest_common.sh@836 
-- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.064 22:11:41 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:22.064 22:11:41 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:28:22.064 [2024-07-13 22:11:41.325229] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:28:22.064 [2024-07-13 22:11:41.325334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1533014 ] 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:28:22.064 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: 
Requested device 0000:3f:01.6 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:22.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:22.064 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:22.323 [2024-07-13 22:11:41.488494] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:22.323 [2024-07-13 22:11:41.690295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.323 [2024-07-13 22:11:41.690301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:23.698 [2024-07-13 22:11:42.685712] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:28:23.698 22:11:42 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:23.698 22:11:42 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:28:23.698 22:11:42 compress_compdev -- 
compress/compress.sh@74 -- # create_vols 512 00:28:23.698 22:11:42 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:23.698 22:11:42 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:26.978 [2024-07-13 22:11:45.906595] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d220 PMD being used: compress_qat 00:28:26.978 22:11:45 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:28:26.978 22:11:45 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:28:26.978 22:11:45 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:26.978 22:11:45 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:28:26.978 22:11:45 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:26.978 22:11:45 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:26.978 22:11:45 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:26.978 22:11:46 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:28:26.978 [ 00:28:26.978 { 00:28:26.978 "name": "Nvme0n1", 00:28:26.978 "aliases": [ 00:28:26.978 "5614d584-2718-46ca-83ec-b95310c8062a" 00:28:26.978 ], 00:28:26.978 "product_name": "NVMe disk", 00:28:26.978 "block_size": 512, 00:28:26.978 "num_blocks": 3907029168, 00:28:26.978 "uuid": "5614d584-2718-46ca-83ec-b95310c8062a", 00:28:26.978 "assigned_rate_limits": { 00:28:26.978 "rw_ios_per_sec": 0, 00:28:26.978 "rw_mbytes_per_sec": 0, 00:28:26.978 "r_mbytes_per_sec": 0, 00:28:26.978 "w_mbytes_per_sec": 0 00:28:26.978 }, 00:28:26.978 "claimed": false, 00:28:26.978 "zoned": false, 00:28:26.978 "supported_io_types": 
{ 00:28:26.978 "read": true, 00:28:26.978 "write": true, 00:28:26.978 "unmap": true, 00:28:26.978 "flush": true, 00:28:26.978 "reset": true, 00:28:26.978 "nvme_admin": true, 00:28:26.978 "nvme_io": true, 00:28:26.978 "nvme_io_md": false, 00:28:26.978 "write_zeroes": true, 00:28:26.978 "zcopy": false, 00:28:26.978 "get_zone_info": false, 00:28:26.978 "zone_management": false, 00:28:26.978 "zone_append": false, 00:28:26.978 "compare": false, 00:28:26.978 "compare_and_write": false, 00:28:26.978 "abort": true, 00:28:26.978 "seek_hole": false, 00:28:26.978 "seek_data": false, 00:28:26.978 "copy": false, 00:28:26.978 "nvme_iov_md": false 00:28:26.978 }, 00:28:26.978 "driver_specific": { 00:28:26.978 "nvme": [ 00:28:26.978 { 00:28:26.978 "pci_address": "0000:d8:00.0", 00:28:26.978 "trid": { 00:28:26.978 "trtype": "PCIe", 00:28:26.978 "traddr": "0000:d8:00.0" 00:28:26.978 }, 00:28:26.978 "ctrlr_data": { 00:28:26.978 "cntlid": 0, 00:28:26.978 "vendor_id": "0x8086", 00:28:26.978 "model_number": "INTEL SSDPE2KX020T8", 00:28:26.978 "serial_number": "BTLJ125505KA2P0BGN", 00:28:26.978 "firmware_revision": "VDV10170", 00:28:26.978 "oacs": { 00:28:26.978 "security": 0, 00:28:26.978 "format": 1, 00:28:26.978 "firmware": 1, 00:28:26.978 "ns_manage": 1 00:28:26.978 }, 00:28:26.978 "multi_ctrlr": false, 00:28:26.978 "ana_reporting": false 00:28:26.978 }, 00:28:26.978 "vs": { 00:28:26.978 "nvme_version": "1.2" 00:28:26.978 }, 00:28:26.978 "ns_data": { 00:28:26.978 "id": 1, 00:28:26.978 "can_share": false 00:28:26.978 } 00:28:26.978 } 00:28:26.978 ], 00:28:26.978 "mp_policy": "active_passive" 00:28:26.978 } 00:28:26.978 } 00:28:26.978 ] 00:28:26.978 22:11:46 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:28:26.978 22:11:46 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:28:27.236 [2024-07-13 22:11:46.453199] accel_dpdk_compressdev.c: 
690:_set_pmd: *NOTICE*: Channel 0x60e00001d3e0 PMD being used: compress_qat 00:28:28.167 1f45f3b4-e3bf-4b97-8a02-c59e585921c4 00:28:28.167 22:11:47 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:28:28.425 52c07cb1-3eb5-4e1d-ac0b-ffe50bdb3577 00:28:28.425 22:11:47 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:28:28.425 22:11:47 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:28:28.425 22:11:47 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:28.425 22:11:47 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:28:28.425 22:11:47 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:28.425 22:11:47 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:28.425 22:11:47 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:28.425 22:11:47 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:28:28.682 [ 00:28:28.683 { 00:28:28.683 "name": "52c07cb1-3eb5-4e1d-ac0b-ffe50bdb3577", 00:28:28.683 "aliases": [ 00:28:28.683 "lvs0/lv0" 00:28:28.683 ], 00:28:28.683 "product_name": "Logical Volume", 00:28:28.683 "block_size": 512, 00:28:28.683 "num_blocks": 204800, 00:28:28.683 "uuid": "52c07cb1-3eb5-4e1d-ac0b-ffe50bdb3577", 00:28:28.683 "assigned_rate_limits": { 00:28:28.683 "rw_ios_per_sec": 0, 00:28:28.683 "rw_mbytes_per_sec": 0, 00:28:28.683 "r_mbytes_per_sec": 0, 00:28:28.683 "w_mbytes_per_sec": 0 00:28:28.683 }, 00:28:28.683 "claimed": false, 00:28:28.683 "zoned": false, 00:28:28.683 "supported_io_types": { 00:28:28.683 "read": true, 00:28:28.683 "write": true, 00:28:28.683 "unmap": true, 00:28:28.683 "flush": false, 00:28:28.683 
"reset": true, 00:28:28.683 "nvme_admin": false, 00:28:28.683 "nvme_io": false, 00:28:28.683 "nvme_io_md": false, 00:28:28.683 "write_zeroes": true, 00:28:28.683 "zcopy": false, 00:28:28.683 "get_zone_info": false, 00:28:28.683 "zone_management": false, 00:28:28.683 "zone_append": false, 00:28:28.683 "compare": false, 00:28:28.683 "compare_and_write": false, 00:28:28.683 "abort": false, 00:28:28.683 "seek_hole": true, 00:28:28.683 "seek_data": true, 00:28:28.683 "copy": false, 00:28:28.683 "nvme_iov_md": false 00:28:28.683 }, 00:28:28.683 "driver_specific": { 00:28:28.683 "lvol": { 00:28:28.683 "lvol_store_uuid": "1f45f3b4-e3bf-4b97-8a02-c59e585921c4", 00:28:28.683 "base_bdev": "Nvme0n1", 00:28:28.683 "thin_provision": true, 00:28:28.683 "num_allocated_clusters": 0, 00:28:28.683 "snapshot": false, 00:28:28.683 "clone": false, 00:28:28.683 "esnap_clone": false 00:28:28.683 } 00:28:28.683 } 00:28:28.683 } 00:28:28.683 ] 00:28:28.683 22:11:47 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:28:28.683 22:11:47 compress_compdev -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:28:28.683 22:11:47 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:28:28.939 [2024-07-13 22:11:48.131447] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:28:28.939 COMP_lvs0/lv0 00:28:28.939 22:11:48 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:28:28.939 22:11:48 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:28:28.939 22:11:48 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:28.939 22:11:48 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:28:28.939 22:11:48 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:28.939 22:11:48 compress_compdev -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:28.939 22:11:48 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:29.197 22:11:48 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:28:29.197 [ 00:28:29.197 { 00:28:29.197 "name": "COMP_lvs0/lv0", 00:28:29.197 "aliases": [ 00:28:29.197 "5d518e15-1a72-551c-aa50-77771b3a87c9" 00:28:29.198 ], 00:28:29.198 "product_name": "compress", 00:28:29.198 "block_size": 512, 00:28:29.198 "num_blocks": 200704, 00:28:29.198 "uuid": "5d518e15-1a72-551c-aa50-77771b3a87c9", 00:28:29.198 "assigned_rate_limits": { 00:28:29.198 "rw_ios_per_sec": 0, 00:28:29.198 "rw_mbytes_per_sec": 0, 00:28:29.198 "r_mbytes_per_sec": 0, 00:28:29.198 "w_mbytes_per_sec": 0 00:28:29.198 }, 00:28:29.198 "claimed": false, 00:28:29.198 "zoned": false, 00:28:29.198 "supported_io_types": { 00:28:29.198 "read": true, 00:28:29.198 "write": true, 00:28:29.198 "unmap": false, 00:28:29.198 "flush": false, 00:28:29.198 "reset": false, 00:28:29.198 "nvme_admin": false, 00:28:29.198 "nvme_io": false, 00:28:29.198 "nvme_io_md": false, 00:28:29.198 "write_zeroes": true, 00:28:29.198 "zcopy": false, 00:28:29.198 "get_zone_info": false, 00:28:29.198 "zone_management": false, 00:28:29.198 "zone_append": false, 00:28:29.198 "compare": false, 00:28:29.198 "compare_and_write": false, 00:28:29.198 "abort": false, 00:28:29.198 "seek_hole": false, 00:28:29.198 "seek_data": false, 00:28:29.198 "copy": false, 00:28:29.198 "nvme_iov_md": false 00:28:29.198 }, 00:28:29.198 "driver_specific": { 00:28:29.198 "compress": { 00:28:29.198 "name": "COMP_lvs0/lv0", 00:28:29.198 "base_bdev_name": "52c07cb1-3eb5-4e1d-ac0b-ffe50bdb3577" 00:28:29.198 } 00:28:29.198 } 00:28:29.198 } 00:28:29.198 ] 00:28:29.198 22:11:48 compress_compdev -- common/autotest_common.sh@905 -- # return 0 
00:28:29.198 22:11:48 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:28:29.456 [2024-07-13 22:11:48.622753] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e0000101e0 PMD being used: compress_qat 00:28:29.456 [2024-07-13 22:11:48.625840] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d4c0 PMD being used: compress_qat 00:28:29.456 Running I/O for 3 seconds... 00:28:32.735 00:28:32.736 Latency(us) 00:28:32.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:32.736 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:28:32.736 Verification LBA range: start 0x0 length 0x3100 00:28:32.736 COMP_lvs0/lv0 : 3.01 3947.02 15.42 0.00 0.00 8071.17 135.17 14470.35 00:28:32.736 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:28:32.736 Verification LBA range: start 0x3100 length 0x3100 00:28:32.736 COMP_lvs0/lv0 : 3.01 4006.84 15.65 0.00 0.00 7951.48 126.16 13421.77 00:28:32.736 =================================================================================================================== 00:28:32.736 Total : 7953.86 31.07 0.00 0.00 8010.87 126.16 14470.35 00:28:32.736 0 00:28:32.736 22:11:51 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:28:32.736 22:11:51 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:28:32.736 22:11:51 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:28:32.736 22:11:52 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:32.736 22:11:52 compress_compdev -- compress/compress.sh@78 -- # killprocess 1533014 00:28:32.736 22:11:52 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 
1533014 ']' 00:28:32.736 22:11:52 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 1533014 00:28:32.736 22:11:52 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:28:32.736 22:11:52 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:32.736 22:11:52 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1533014 00:28:32.736 22:11:52 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:32.736 22:11:52 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:32.736 22:11:52 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1533014' 00:28:32.736 killing process with pid 1533014 00:28:32.736 22:11:52 compress_compdev -- common/autotest_common.sh@967 -- # kill 1533014 00:28:32.736 Received shutdown signal, test time was about 3.000000 seconds 00:28:32.736 00:28:32.736 Latency(us) 00:28:32.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:32.736 =================================================================================================================== 00:28:32.736 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:32.736 22:11:52 compress_compdev -- common/autotest_common.sh@972 -- # wait 1533014 00:28:36.016 22:11:55 compress_compdev -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:28:36.016 22:11:55 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:28:36.016 22:11:55 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=1535230 00:28:36.016 22:11:55 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:36.016 22:11:55 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 1535230 00:28:36.016 22:11:55 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 1535230 ']' 00:28:36.016 22:11:55 compress_compdev -- 
compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:28:36.016 22:11:55 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.016 22:11:55 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:36.016 22:11:55 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.016 22:11:55 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:36.016 22:11:55 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:28:36.016 [2024-07-13 22:11:55.389770] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:28:36.016 [2024-07-13 22:11:55.389872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1535230 ] 00:28:36.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.274 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:36.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.274 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:36.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.274 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:36.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.274 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:36.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.274 EAL: Requested device 0000:3d:01.4 cannot be used 
00:28:36.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.274 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:36.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.274 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:36.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.274 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:36.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.274 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:36.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.274 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:36.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.274 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:36.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.274 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:36.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.274 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:36.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.274 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:36.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.274 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:36.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.275 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:36.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.275 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:36.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.275 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:36.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.275 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:36.275 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.275 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:36.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.275 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:36.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.275 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:36.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.275 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:36.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.275 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:36.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.275 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:36.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.275 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:36.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.275 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:36.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.275 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:36.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.275 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:36.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.275 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:36.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.275 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:36.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:36.275 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:36.275 [2024-07-13 22:11:55.552182] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:36.558 [2024-07-13 22:11:55.763283] reactor.c: 941:reactor_run: 
*NOTICE*: Reactor started on core 1 00:28:36.558 [2024-07-13 22:11:55.763289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:37.500 [2024-07-13 22:11:56.765791] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:28:37.758 22:11:56 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:37.758 22:11:56 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:28:37.758 22:11:56 compress_compdev -- compress/compress.sh@74 -- # create_vols 4096 00:28:37.758 22:11:56 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:37.758 22:11:56 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:41.039 [2024-07-13 22:11:59.986216] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d220 PMD being used: compress_qat 00:28:41.039 22:12:00 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:28:41.039 22:12:00 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:28:41.039 22:12:00 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:41.039 22:12:00 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:28:41.039 22:12:00 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:41.039 22:12:00 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:41.039 22:12:00 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:41.039 22:12:00 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:28:41.039 [ 00:28:41.039 { 00:28:41.039 "name": "Nvme0n1", 00:28:41.039 "aliases": [ 00:28:41.039 
"c589538d-e80c-4802-b002-22ac15399835" 00:28:41.039 ], 00:28:41.039 "product_name": "NVMe disk", 00:28:41.039 "block_size": 512, 00:28:41.039 "num_blocks": 3907029168, 00:28:41.039 "uuid": "c589538d-e80c-4802-b002-22ac15399835", 00:28:41.039 "assigned_rate_limits": { 00:28:41.039 "rw_ios_per_sec": 0, 00:28:41.039 "rw_mbytes_per_sec": 0, 00:28:41.039 "r_mbytes_per_sec": 0, 00:28:41.039 "w_mbytes_per_sec": 0 00:28:41.039 }, 00:28:41.039 "claimed": false, 00:28:41.039 "zoned": false, 00:28:41.039 "supported_io_types": { 00:28:41.039 "read": true, 00:28:41.039 "write": true, 00:28:41.039 "unmap": true, 00:28:41.039 "flush": true, 00:28:41.039 "reset": true, 00:28:41.039 "nvme_admin": true, 00:28:41.039 "nvme_io": true, 00:28:41.039 "nvme_io_md": false, 00:28:41.039 "write_zeroes": true, 00:28:41.039 "zcopy": false, 00:28:41.039 "get_zone_info": false, 00:28:41.039 "zone_management": false, 00:28:41.039 "zone_append": false, 00:28:41.039 "compare": false, 00:28:41.039 "compare_and_write": false, 00:28:41.039 "abort": true, 00:28:41.039 "seek_hole": false, 00:28:41.039 "seek_data": false, 00:28:41.039 "copy": false, 00:28:41.039 "nvme_iov_md": false 00:28:41.039 }, 00:28:41.039 "driver_specific": { 00:28:41.039 "nvme": [ 00:28:41.039 { 00:28:41.039 "pci_address": "0000:d8:00.0", 00:28:41.039 "trid": { 00:28:41.039 "trtype": "PCIe", 00:28:41.039 "traddr": "0000:d8:00.0" 00:28:41.039 }, 00:28:41.039 "ctrlr_data": { 00:28:41.039 "cntlid": 0, 00:28:41.039 "vendor_id": "0x8086", 00:28:41.039 "model_number": "INTEL SSDPE2KX020T8", 00:28:41.039 "serial_number": "BTLJ125505KA2P0BGN", 00:28:41.039 "firmware_revision": "VDV10170", 00:28:41.039 "oacs": { 00:28:41.039 "security": 0, 00:28:41.039 "format": 1, 00:28:41.039 "firmware": 1, 00:28:41.039 "ns_manage": 1 00:28:41.039 }, 00:28:41.039 "multi_ctrlr": false, 00:28:41.039 "ana_reporting": false 00:28:41.039 }, 00:28:41.039 "vs": { 00:28:41.039 "nvme_version": "1.2" 00:28:41.039 }, 00:28:41.039 "ns_data": { 00:28:41.039 "id": 1, 
00:28:41.039 "can_share": false 00:28:41.039 } 00:28:41.039 } 00:28:41.039 ], 00:28:41.039 "mp_policy": "active_passive" 00:28:41.039 } 00:28:41.039 } 00:28:41.039 ] 00:28:41.039 22:12:00 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:28:41.039 22:12:00 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:28:41.296 [2024-07-13 22:12:00.524721] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d3e0 PMD being used: compress_qat 00:28:42.229 e5d82488-5e35-4fe8-880c-72b6850f9855 00:28:42.229 22:12:01 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:28:42.486 8ba38658-3781-40b6-a957-18a867ee31d0 00:28:42.486 22:12:01 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:28:42.486 22:12:01 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:28:42.486 22:12:01 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:42.486 22:12:01 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:28:42.486 22:12:01 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:42.486 22:12:01 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:42.486 22:12:01 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:42.743 22:12:01 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:28:42.743 [ 00:28:42.743 { 00:28:42.743 "name": "8ba38658-3781-40b6-a957-18a867ee31d0", 00:28:42.743 "aliases": [ 00:28:42.743 "lvs0/lv0" 00:28:42.743 ], 00:28:42.743 "product_name": "Logical Volume", 00:28:42.743 "block_size": 512, 
00:28:42.743 "num_blocks": 204800, 00:28:42.743 "uuid": "8ba38658-3781-40b6-a957-18a867ee31d0", 00:28:42.743 "assigned_rate_limits": { 00:28:42.743 "rw_ios_per_sec": 0, 00:28:42.743 "rw_mbytes_per_sec": 0, 00:28:42.743 "r_mbytes_per_sec": 0, 00:28:42.743 "w_mbytes_per_sec": 0 00:28:42.743 }, 00:28:42.743 "claimed": false, 00:28:42.743 "zoned": false, 00:28:42.743 "supported_io_types": { 00:28:42.743 "read": true, 00:28:42.743 "write": true, 00:28:42.743 "unmap": true, 00:28:42.743 "flush": false, 00:28:42.743 "reset": true, 00:28:42.743 "nvme_admin": false, 00:28:42.743 "nvme_io": false, 00:28:42.743 "nvme_io_md": false, 00:28:42.743 "write_zeroes": true, 00:28:42.743 "zcopy": false, 00:28:42.743 "get_zone_info": false, 00:28:42.743 "zone_management": false, 00:28:42.743 "zone_append": false, 00:28:42.743 "compare": false, 00:28:42.743 "compare_and_write": false, 00:28:42.743 "abort": false, 00:28:42.743 "seek_hole": true, 00:28:42.743 "seek_data": true, 00:28:42.743 "copy": false, 00:28:42.743 "nvme_iov_md": false 00:28:42.743 }, 00:28:42.743 "driver_specific": { 00:28:42.743 "lvol": { 00:28:42.743 "lvol_store_uuid": "e5d82488-5e35-4fe8-880c-72b6850f9855", 00:28:42.743 "base_bdev": "Nvme0n1", 00:28:42.743 "thin_provision": true, 00:28:42.743 "num_allocated_clusters": 0, 00:28:42.743 "snapshot": false, 00:28:42.743 "clone": false, 00:28:42.743 "esnap_clone": false 00:28:42.743 } 00:28:42.743 } 00:28:42.743 } 00:28:42.743 ] 00:28:42.743 22:12:02 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:28:42.744 22:12:02 compress_compdev -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:28:42.744 22:12:02 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:28:43.001 [2024-07-13 22:12:02.247879] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:28:43.001 COMP_lvs0/lv0 
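The trace above exercises `create_vols` from `test/compress/compress.sh`: after `gen_nvme.sh` and `load_subsystem_config` attach the NVMe disk, three RPCs build an lvstore on `Nvme0n1`, a thin-provisioned lvol, and the compress vbdev on top of it. A minimal dry-run sketch of that RPC sequence (the `rpc` function here just echoes; on a real system it would be `scripts/rpc.py` talking to the running bdevperf app on `/var/tmp/spdk.sock`):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the create_vols RPC sequence traced above.
# "rpc" only echoes the command; swap in the real rpc.py path to run it live.
rpc() { echo "rpc.py $*"; }

rpc bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0   # lvstore on the NVMe bdev
rpc bdev_lvol_create -t -l lvs0 lv0 100                         # thin-provisioned 100 MiB lvol
rpc bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096       # compress vbdev, 4 KiB logical blocks
rpc bdev_wait_for_examine                                       # block until COMP_lvs0/lv0 is registered
```

The `-l 4096` argument matches the 4096 logical block size this run passes to `run_bdevperf`; the earlier run in this log omitted it and used the 512-byte default.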
00:28:43.001 22:12:02 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:28:43.001 22:12:02 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:28:43.001 22:12:02 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:43.001 22:12:02 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:28:43.001 22:12:02 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:43.001 22:12:02 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:43.001 22:12:02 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:43.257 22:12:02 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:28:43.257 [ 00:28:43.257 { 00:28:43.257 "name": "COMP_lvs0/lv0", 00:28:43.257 "aliases": [ 00:28:43.257 "ba3aab9f-cce7-5d4e-92dd-16e13716545e" 00:28:43.257 ], 00:28:43.257 "product_name": "compress", 00:28:43.257 "block_size": 4096, 00:28:43.257 "num_blocks": 25088, 00:28:43.257 "uuid": "ba3aab9f-cce7-5d4e-92dd-16e13716545e", 00:28:43.257 "assigned_rate_limits": { 00:28:43.257 "rw_ios_per_sec": 0, 00:28:43.257 "rw_mbytes_per_sec": 0, 00:28:43.257 "r_mbytes_per_sec": 0, 00:28:43.257 "w_mbytes_per_sec": 0 00:28:43.257 }, 00:28:43.257 "claimed": false, 00:28:43.257 "zoned": false, 00:28:43.257 "supported_io_types": { 00:28:43.257 "read": true, 00:28:43.257 "write": true, 00:28:43.257 "unmap": false, 00:28:43.257 "flush": false, 00:28:43.257 "reset": false, 00:28:43.257 "nvme_admin": false, 00:28:43.257 "nvme_io": false, 00:28:43.257 "nvme_io_md": false, 00:28:43.257 "write_zeroes": true, 00:28:43.257 "zcopy": false, 00:28:43.257 "get_zone_info": false, 00:28:43.257 "zone_management": false, 00:28:43.257 "zone_append": false, 00:28:43.257 "compare": false, 00:28:43.257 
"compare_and_write": false, 00:28:43.257 "abort": false, 00:28:43.257 "seek_hole": false, 00:28:43.257 "seek_data": false, 00:28:43.257 "copy": false, 00:28:43.257 "nvme_iov_md": false 00:28:43.257 }, 00:28:43.257 "driver_specific": { 00:28:43.257 "compress": { 00:28:43.257 "name": "COMP_lvs0/lv0", 00:28:43.257 "base_bdev_name": "8ba38658-3781-40b6-a957-18a867ee31d0" 00:28:43.257 } 00:28:43.257 } 00:28:43.257 } 00:28:43.257 ] 00:28:43.257 22:12:02 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:28:43.257 22:12:02 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:28:43.515 [2024-07-13 22:12:02.687224] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e0000101e0 PMD being used: compress_qat 00:28:43.515 [2024-07-13 22:12:02.690253] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d4c0 PMD being used: compress_qat 00:28:43.515 Running I/O for 3 seconds... 
00:28:46.797 
00:28:46.797 Latency(us)
00:28:46.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:46.797 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096)
00:28:46.797 Verification LBA range: start 0x0 length 0x3100
00:28:46.797 COMP_lvs0/lv0 : 3.01 3755.93 14.67 0.00 0.00 8479.47 182.68 14155.78
00:28:46.797 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096)
00:28:46.797 Verification LBA range: start 0x3100 length 0x3100
00:28:46.797 COMP_lvs0/lv0 : 3.01 3855.11 15.06 0.00 0.00 8260.41 172.85 13631.49
00:28:46.797 ===================================================================================================================
00:28:46.797 Total : 7611.03 29.73 0.00 0.00 8368.48 172.85 14155.78
00:28:46.797 0
00:28:46.797 22:12:05 compress_compdev -- compress/compress.sh@76 -- # destroy_vols
00:28:46.797 22:12:05 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0
00:28:46.797 22:12:05 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0
00:28:46.797 22:12:06 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT
00:28:46.797 22:12:06 compress_compdev -- compress/compress.sh@78 -- # killprocess 1535230
00:28:46.797 22:12:06 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 1535230 ']'
00:28:46.797 22:12:06 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 1535230
00:28:46.797 22:12:06 compress_compdev -- common/autotest_common.sh@953 -- # uname
00:28:46.797 22:12:06 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:46.797 22:12:06 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1535230
00:28:46.797 22:12:06 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_1
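As a sanity check on the bdevperf results table above: the Total row sums the two per-job IOPS and MiB/s figures, and its average latency is the IOPS-weighted mean of the per-job averages. Recomputing from the printed job rows (last digits drift slightly because the per-job values are themselves rounded):

```shell
# Recompute the Total row of the bdevperf table from the two job rows above.
iops_a=3755.93; lat_a=8479.47   # Core Mask 0x2 job: IOPS, average latency (us)
iops_b=3855.11; lat_b=8260.41   # Core Mask 0x4 job

total_iops=$(awk -v a="$iops_a" -v b="$iops_b" 'BEGIN { printf "%.2f", a + b }')
# Aggregate average latency = IOPS-weighted mean of the per-job averages.
avg_lat=$(awk -v a="$iops_a" -v x="$lat_a" -v b="$iops_b" -v y="$lat_b" \
    'BEGIN { printf "%.2f", (a * x + b * y) / (a + b) }')

echo "total_iops=$total_iops avg_lat=$avg_lat"
```

This reproduces roughly 7611 IOPS and 8368.5 us against the logged 7611.03 and 8368.48; the residual gap is rounding in the printed inputs, not a different aggregation rule.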
00:28:46.797 22:12:06 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:46.797 22:12:06 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1535230'
00:28:46.797 killing process with pid 1535230
00:28:46.797 22:12:06 compress_compdev -- common/autotest_common.sh@967 -- # kill 1535230
00:28:46.797 Received shutdown signal, test time was about 3.000000 seconds
00:28:46.797 
00:28:46.797 Latency(us)
00:28:46.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:46.797 ===================================================================================================================
00:28:46.797 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:46.797 22:12:06 compress_compdev -- common/autotest_common.sh@972 -- # wait 1535230
00:28:50.079 22:12:09 compress_compdev -- compress/compress.sh@89 -- # run_bdevio
00:28:50.080 22:12:09 compress_compdev -- compress/compress.sh@50 -- # [[ compdev == \c\o\m\p\d\e\v ]]
00:28:50.080 22:12:09 compress_compdev -- compress/compress.sh@55 -- # bdevio_pid=1538161
00:28:50.080 22:12:09 compress_compdev -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT
00:28:50.080 22:12:09 compress_compdev -- compress/compress.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json -w
00:28:50.080 22:12:09 compress_compdev -- compress/compress.sh@57 -- # waitforlisten 1538161
00:28:50.080 22:12:09 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 1538161 ']'
00:28:50.080 22:12:09 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:50.080 22:12:09 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:50.080 22:12:09 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on
UNIX domain socket /var/tmp/spdk.sock...' 00:28:50.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.080 22:12:09 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:50.080 22:12:09 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:28:50.080 [2024-07-13 22:12:09.396568] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:28:50.080 [2024-07-13 22:12:09.396666] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1538161 ] 00:28:50.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.338 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:50.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.338 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:50.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.338 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:50.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.338 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:50.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.338 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:50.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.338 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:50.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.338 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:50.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.338 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:50.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.338 EAL: Requested device 0000:3d:02.0 cannot be 
used 00:28:50.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.338 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:50.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.338 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:50.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.338 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:50.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.338 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:50.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.338 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:50.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.338 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:50.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.338 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:50.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.338 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:50.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.338 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:50.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.338 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:50.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.338 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:50.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.338 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:50.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.338 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:50.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.338 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:50.338 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.338 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:50.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.338 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:50.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.338 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:50.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.338 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:50.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.339 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:50.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.339 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:50.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.339 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:50.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.339 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:50.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:50.339 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:50.339 [2024-07-13 22:12:09.558735] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:50.596 [2024-07-13 22:12:09.764517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.596 [2024-07-13 22:12:09.764582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.596 [2024-07-13 22:12:09.764590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:51.531 [2024-07-13 22:12:10.775708] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:28:51.788 22:12:10 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:51.788 22:12:10 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:28:51.788 
22:12:10 compress_compdev -- compress/compress.sh@58 -- # create_vols 00:28:51.788 22:12:10 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:51.788 22:12:10 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:55.067 [2024-07-13 22:12:13.994846] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000026280 PMD being used: compress_qat 00:28:55.067 22:12:14 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:28:55.067 22:12:14 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:28:55.067 22:12:14 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:55.067 22:12:14 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:28:55.067 22:12:14 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:55.067 22:12:14 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:55.067 22:12:14 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:55.067 22:12:14 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:28:55.067 [ 00:28:55.067 { 00:28:55.067 "name": "Nvme0n1", 00:28:55.067 "aliases": [ 00:28:55.067 "95d18337-00b8-4c9e-9b4a-72c9643e8c8b" 00:28:55.067 ], 00:28:55.067 "product_name": "NVMe disk", 00:28:55.067 "block_size": 512, 00:28:55.067 "num_blocks": 3907029168, 00:28:55.067 "uuid": "95d18337-00b8-4c9e-9b4a-72c9643e8c8b", 00:28:55.067 "assigned_rate_limits": { 00:28:55.067 "rw_ios_per_sec": 0, 00:28:55.067 "rw_mbytes_per_sec": 0, 00:28:55.067 "r_mbytes_per_sec": 0, 00:28:55.067 "w_mbytes_per_sec": 0 00:28:55.067 }, 00:28:55.067 "claimed": false, 00:28:55.067 "zoned": false, 
00:28:55.067 "supported_io_types": { 00:28:55.067 "read": true, 00:28:55.067 "write": true, 00:28:55.067 "unmap": true, 00:28:55.067 "flush": true, 00:28:55.067 "reset": true, 00:28:55.067 "nvme_admin": true, 00:28:55.067 "nvme_io": true, 00:28:55.067 "nvme_io_md": false, 00:28:55.067 "write_zeroes": true, 00:28:55.067 "zcopy": false, 00:28:55.067 "get_zone_info": false, 00:28:55.067 "zone_management": false, 00:28:55.067 "zone_append": false, 00:28:55.067 "compare": false, 00:28:55.067 "compare_and_write": false, 00:28:55.067 "abort": true, 00:28:55.067 "seek_hole": false, 00:28:55.067 "seek_data": false, 00:28:55.067 "copy": false, 00:28:55.067 "nvme_iov_md": false 00:28:55.067 }, 00:28:55.067 "driver_specific": { 00:28:55.067 "nvme": [ 00:28:55.067 { 00:28:55.067 "pci_address": "0000:d8:00.0", 00:28:55.067 "trid": { 00:28:55.067 "trtype": "PCIe", 00:28:55.067 "traddr": "0000:d8:00.0" 00:28:55.067 }, 00:28:55.067 "ctrlr_data": { 00:28:55.067 "cntlid": 0, 00:28:55.067 "vendor_id": "0x8086", 00:28:55.067 "model_number": "INTEL SSDPE2KX020T8", 00:28:55.067 "serial_number": "BTLJ125505KA2P0BGN", 00:28:55.067 "firmware_revision": "VDV10170", 00:28:55.067 "oacs": { 00:28:55.067 "security": 0, 00:28:55.068 "format": 1, 00:28:55.068 "firmware": 1, 00:28:55.068 "ns_manage": 1 00:28:55.068 }, 00:28:55.068 "multi_ctrlr": false, 00:28:55.068 "ana_reporting": false 00:28:55.068 }, 00:28:55.068 "vs": { 00:28:55.068 "nvme_version": "1.2" 00:28:55.068 }, 00:28:55.068 "ns_data": { 00:28:55.068 "id": 1, 00:28:55.068 "can_share": false 00:28:55.068 } 00:28:55.068 } 00:28:55.068 ], 00:28:55.068 "mp_policy": "active_passive" 00:28:55.068 } 00:28:55.068 } 00:28:55.068 ] 00:28:55.068 22:12:14 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:28:55.068 22:12:14 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:28:55.326 [2024-07-13 22:12:14.526195] 
accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000026440 PMD being used: compress_qat 00:28:56.260 63b594ad-7b1f-4b5d-8602-0c23448e6ee2 00:28:56.260 22:12:15 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:28:56.533 4a00fd74-e17c-467b-b9cb-13af4165a5ab 00:28:56.533 22:12:15 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:28:56.533 22:12:15 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:28:56.533 22:12:15 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:56.533 22:12:15 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:28:56.533 22:12:15 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:56.533 22:12:15 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:56.533 22:12:15 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:56.533 22:12:15 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:28:56.804 [ 00:28:56.804 { 00:28:56.804 "name": "4a00fd74-e17c-467b-b9cb-13af4165a5ab", 00:28:56.804 "aliases": [ 00:28:56.804 "lvs0/lv0" 00:28:56.804 ], 00:28:56.804 "product_name": "Logical Volume", 00:28:56.804 "block_size": 512, 00:28:56.804 "num_blocks": 204800, 00:28:56.804 "uuid": "4a00fd74-e17c-467b-b9cb-13af4165a5ab", 00:28:56.804 "assigned_rate_limits": { 00:28:56.804 "rw_ios_per_sec": 0, 00:28:56.804 "rw_mbytes_per_sec": 0, 00:28:56.804 "r_mbytes_per_sec": 0, 00:28:56.804 "w_mbytes_per_sec": 0 00:28:56.804 }, 00:28:56.804 "claimed": false, 00:28:56.804 "zoned": false, 00:28:56.804 "supported_io_types": { 00:28:56.804 "read": true, 00:28:56.804 "write": true, 00:28:56.804 "unmap": true, 00:28:56.804 "flush": 
false, 00:28:56.804 "reset": true, 00:28:56.804 "nvme_admin": false, 00:28:56.804 "nvme_io": false, 00:28:56.804 "nvme_io_md": false, 00:28:56.804 "write_zeroes": true, 00:28:56.804 "zcopy": false, 00:28:56.804 "get_zone_info": false, 00:28:56.804 "zone_management": false, 00:28:56.804 "zone_append": false, 00:28:56.804 "compare": false, 00:28:56.804 "compare_and_write": false, 00:28:56.804 "abort": false, 00:28:56.804 "seek_hole": true, 00:28:56.804 "seek_data": true, 00:28:56.804 "copy": false, 00:28:56.804 "nvme_iov_md": false 00:28:56.804 }, 00:28:56.804 "driver_specific": { 00:28:56.804 "lvol": { 00:28:56.804 "lvol_store_uuid": "63b594ad-7b1f-4b5d-8602-0c23448e6ee2", 00:28:56.804 "base_bdev": "Nvme0n1", 00:28:56.804 "thin_provision": true, 00:28:56.804 "num_allocated_clusters": 0, 00:28:56.805 "snapshot": false, 00:28:56.805 "clone": false, 00:28:56.805 "esnap_clone": false 00:28:56.805 } 00:28:56.805 } 00:28:56.805 } 00:28:56.805 ] 00:28:56.805 22:12:16 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:28:56.805 22:12:16 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:28:56.805 22:12:16 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:28:56.805 [2024-07-13 22:12:16.187872] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:28:56.805 COMP_lvs0/lv0 00:28:57.063 22:12:16 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:28:57.063 22:12:16 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:28:57.063 22:12:16 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:57.063 22:12:16 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:28:57.063 22:12:16 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:57.063 22:12:16 compress_compdev -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:57.063 22:12:16 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:57.063 22:12:16 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:28:57.321 [ 00:28:57.321 { 00:28:57.321 "name": "COMP_lvs0/lv0", 00:28:57.321 "aliases": [ 00:28:57.321 "85f45d29-eb4b-5f5a-8a62-a3e1ec943888" 00:28:57.321 ], 00:28:57.321 "product_name": "compress", 00:28:57.321 "block_size": 512, 00:28:57.321 "num_blocks": 200704, 00:28:57.321 "uuid": "85f45d29-eb4b-5f5a-8a62-a3e1ec943888", 00:28:57.321 "assigned_rate_limits": { 00:28:57.321 "rw_ios_per_sec": 0, 00:28:57.321 "rw_mbytes_per_sec": 0, 00:28:57.321 "r_mbytes_per_sec": 0, 00:28:57.321 "w_mbytes_per_sec": 0 00:28:57.322 }, 00:28:57.322 "claimed": false, 00:28:57.322 "zoned": false, 00:28:57.322 "supported_io_types": { 00:28:57.322 "read": true, 00:28:57.322 "write": true, 00:28:57.322 "unmap": false, 00:28:57.322 "flush": false, 00:28:57.322 "reset": false, 00:28:57.322 "nvme_admin": false, 00:28:57.322 "nvme_io": false, 00:28:57.322 "nvme_io_md": false, 00:28:57.322 "write_zeroes": true, 00:28:57.322 "zcopy": false, 00:28:57.322 "get_zone_info": false, 00:28:57.322 "zone_management": false, 00:28:57.322 "zone_append": false, 00:28:57.322 "compare": false, 00:28:57.322 "compare_and_write": false, 00:28:57.322 "abort": false, 00:28:57.322 "seek_hole": false, 00:28:57.322 "seek_data": false, 00:28:57.322 "copy": false, 00:28:57.322 "nvme_iov_md": false 00:28:57.322 }, 00:28:57.322 "driver_specific": { 00:28:57.322 "compress": { 00:28:57.322 "name": "COMP_lvs0/lv0", 00:28:57.322 "base_bdev_name": "4a00fd74-e17c-467b-b9cb-13af4165a5ab" 00:28:57.322 } 00:28:57.322 } 00:28:57.322 } 00:28:57.322 ] 00:28:57.322 22:12:16 compress_compdev -- common/autotest_common.sh@905 -- # return 0 
00:28:57.322 22:12:16 compress_compdev -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:28:57.322 [2024-07-13 22:12:16.656388] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e0000171e0 PMD being used: compress_qat 00:28:57.322 I/O targets: 00:28:57.322 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:28:57.322 00:28:57.322 00:28:57.322 CUnit - A unit testing framework for C - Version 2.1-3 00:28:57.322 http://cunit.sourceforge.net/ 00:28:57.322 00:28:57.322 00:28:57.322 Suite: bdevio tests on: COMP_lvs0/lv0 00:28:57.322 Test: blockdev write read block ...passed 00:28:57.322 Test: blockdev write zeroes read block ...passed 00:28:57.322 Test: blockdev write zeroes read no split ...passed 00:28:57.580 Test: blockdev write zeroes read split ...passed 00:28:57.580 Test: blockdev write zeroes read split partial ...passed 00:28:57.580 Test: blockdev reset ...[2024-07-13 22:12:16.772431] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:28:57.580 passed 00:28:57.580 Test: blockdev write read 8 blocks ...passed 00:28:57.580 Test: blockdev write read size > 128k ...passed 00:28:57.580 Test: blockdev write read invalid size ...passed 00:28:57.580 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:57.580 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:57.580 Test: blockdev write read max offset ...passed 00:28:57.580 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:57.580 Test: blockdev writev readv 8 blocks ...passed 00:28:57.580 Test: blockdev writev readv 30 x 1block ...passed 00:28:57.580 Test: blockdev writev readv block ...passed 00:28:57.580 Test: blockdev writev readv size > 128k ...passed 00:28:57.580 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:57.580 Test: blockdev comparev and writev ...passed 00:28:57.580 Test: blockdev nvme 
passthru rw ...passed 00:28:57.580 Test: blockdev nvme passthru vendor specific ...passed 00:28:57.580 Test: blockdev nvme admin passthru ...passed 00:28:57.580 Test: blockdev copy ...passed 00:28:57.580 00:28:57.580 Run Summary: Type Total Ran Passed Failed Inactive 00:28:57.580 suites 1 1 n/a 0 0 00:28:57.580 tests 23 23 23 0 0 00:28:57.580 asserts 130 130 130 0 n/a 00:28:57.580 00:28:57.580 Elapsed time = 0.348 seconds 00:28:57.580 0 00:28:57.580 22:12:16 compress_compdev -- compress/compress.sh@60 -- # destroy_vols 00:28:57.580 22:12:16 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:28:57.839 22:12:17 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:28:57.839 22:12:17 compress_compdev -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:28:57.839 22:12:17 compress_compdev -- compress/compress.sh@62 -- # killprocess 1538161 00:28:57.839 22:12:17 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 1538161 ']' 00:28:57.839 22:12:17 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 1538161 00:28:57.839 22:12:17 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:28:57.839 22:12:17 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:57.839 22:12:17 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1538161 00:28:58.097 22:12:17 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:58.097 22:12:17 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:58.097 22:12:17 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1538161' 00:28:58.097 killing process with pid 1538161 00:28:58.097 22:12:17 compress_compdev -- common/autotest_common.sh@967 -- # kill 1538161 00:28:58.097 
22:12:17 compress_compdev -- common/autotest_common.sh@972 -- # wait 1538161 00:29:01.382 22:12:20 compress_compdev -- compress/compress.sh@91 -- # '[' 1 -eq 1 ']' 00:29:01.382 22:12:20 compress_compdev -- compress/compress.sh@92 -- # run_bdevperf 64 16384 30 00:29:01.382 22:12:20 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:29:01.382 22:12:20 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=1540039 00:29:01.382 22:12:20 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:01.382 22:12:20 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 64 -o 16384 -w verify -t 30 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:29:01.382 22:12:20 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 1540039 00:29:01.382 22:12:20 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 1540039 ']' 00:29:01.382 22:12:20 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:01.382 22:12:20 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:01.382 22:12:20 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:01.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:01.382 22:12:20 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:01.382 22:12:20 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:29:01.382 [2024-07-13 22:12:20.426374] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:29:01.382 [2024-07-13 22:12:20.426485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1540039 ] 00:29:01.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.382 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:01.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.382 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:01.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.382 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:01.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.382 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:01.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.382 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:01.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.382 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.383 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.383 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.383 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.383 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.383 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.383 EAL: Requested device 0000:3d:02.3 cannot be used 
00:29:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.383 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.383 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.383 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.383 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.383 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.383 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.383 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.383 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.383 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.383 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.383 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.383 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.383 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.383 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:01.383 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.383 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.383 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.383 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.383 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.383 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.383 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:01.383 [2024-07-13 22:12:20.584899] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:01.642 [2024-07-13 22:12:20.783389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.642 [2024-07-13 22:12:20.783395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:02.580 [2024-07-13 22:12:21.786564] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:29:02.580 22:12:21 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:02.580 22:12:21 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:29:02.580 22:12:21 compress_compdev -- compress/compress.sh@74 -- # create_vols 00:29:02.580 22:12:21 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:02.580 22:12:21 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:05.870 [2024-07-13 22:12:25.002098] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d220 PMD being used: compress_qat 00:29:05.870 22:12:25 
compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:29:05.870 22:12:25 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:29:05.870 22:12:25 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:29:05.870 22:12:25 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:29:05.870 22:12:25 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:29:05.870 22:12:25 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:29:05.870 22:12:25 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:05.870 22:12:25 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:29:06.128 [ 00:29:06.128 { 00:29:06.128 "name": "Nvme0n1", 00:29:06.128 "aliases": [ 00:29:06.128 "cda9ec32-cf9f-4d6d-b4b5-fe6b4e105294" 00:29:06.128 ], 00:29:06.128 "product_name": "NVMe disk", 00:29:06.128 "block_size": 512, 00:29:06.128 "num_blocks": 3907029168, 00:29:06.128 "uuid": "cda9ec32-cf9f-4d6d-b4b5-fe6b4e105294", 00:29:06.128 "assigned_rate_limits": { 00:29:06.128 "rw_ios_per_sec": 0, 00:29:06.128 "rw_mbytes_per_sec": 0, 00:29:06.128 "r_mbytes_per_sec": 0, 00:29:06.128 "w_mbytes_per_sec": 0 00:29:06.128 }, 00:29:06.128 "claimed": false, 00:29:06.128 "zoned": false, 00:29:06.128 "supported_io_types": { 00:29:06.128 "read": true, 00:29:06.128 "write": true, 00:29:06.128 "unmap": true, 00:29:06.128 "flush": true, 00:29:06.128 "reset": true, 00:29:06.128 "nvme_admin": true, 00:29:06.128 "nvme_io": true, 00:29:06.128 "nvme_io_md": false, 00:29:06.128 "write_zeroes": true, 00:29:06.128 "zcopy": false, 00:29:06.128 "get_zone_info": false, 00:29:06.128 "zone_management": false, 00:29:06.128 "zone_append": false, 00:29:06.128 "compare": false, 00:29:06.128 "compare_and_write": false, 00:29:06.128 
"abort": true, 00:29:06.128 "seek_hole": false, 00:29:06.128 "seek_data": false, 00:29:06.128 "copy": false, 00:29:06.128 "nvme_iov_md": false 00:29:06.128 }, 00:29:06.128 "driver_specific": { 00:29:06.128 "nvme": [ 00:29:06.128 { 00:29:06.128 "pci_address": "0000:d8:00.0", 00:29:06.128 "trid": { 00:29:06.128 "trtype": "PCIe", 00:29:06.128 "traddr": "0000:d8:00.0" 00:29:06.128 }, 00:29:06.128 "ctrlr_data": { 00:29:06.128 "cntlid": 0, 00:29:06.128 "vendor_id": "0x8086", 00:29:06.128 "model_number": "INTEL SSDPE2KX020T8", 00:29:06.128 "serial_number": "BTLJ125505KA2P0BGN", 00:29:06.128 "firmware_revision": "VDV10170", 00:29:06.128 "oacs": { 00:29:06.128 "security": 0, 00:29:06.128 "format": 1, 00:29:06.128 "firmware": 1, 00:29:06.128 "ns_manage": 1 00:29:06.128 }, 00:29:06.128 "multi_ctrlr": false, 00:29:06.128 "ana_reporting": false 00:29:06.128 }, 00:29:06.128 "vs": { 00:29:06.128 "nvme_version": "1.2" 00:29:06.128 }, 00:29:06.128 "ns_data": { 00:29:06.128 "id": 1, 00:29:06.128 "can_share": false 00:29:06.128 } 00:29:06.128 } 00:29:06.128 ], 00:29:06.128 "mp_policy": "active_passive" 00:29:06.128 } 00:29:06.128 } 00:29:06.128 ] 00:29:06.128 22:12:25 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:29:06.129 22:12:25 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:29:06.387 [2024-07-13 22:12:25.532647] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d3e0 PMD being used: compress_qat 00:29:07.324 e1f4c384-28ed-434f-8781-38b8cfb934cc 00:29:07.324 22:12:26 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:29:07.583 f4d86486-1e65-4abd-ac83-29f352efc158 00:29:07.583 22:12:26 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:29:07.583 22:12:26 compress_compdev -- 
common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:29:07.583 22:12:26 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:29:07.583 22:12:26 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:29:07.583 22:12:26 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:29:07.583 22:12:26 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:29:07.583 22:12:26 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:07.583 22:12:26 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:29:07.842 [ 00:29:07.842 { 00:29:07.842 "name": "f4d86486-1e65-4abd-ac83-29f352efc158", 00:29:07.842 "aliases": [ 00:29:07.842 "lvs0/lv0" 00:29:07.842 ], 00:29:07.842 "product_name": "Logical Volume", 00:29:07.842 "block_size": 512, 00:29:07.842 "num_blocks": 204800, 00:29:07.842 "uuid": "f4d86486-1e65-4abd-ac83-29f352efc158", 00:29:07.842 "assigned_rate_limits": { 00:29:07.842 "rw_ios_per_sec": 0, 00:29:07.842 "rw_mbytes_per_sec": 0, 00:29:07.842 "r_mbytes_per_sec": 0, 00:29:07.842 "w_mbytes_per_sec": 0 00:29:07.842 }, 00:29:07.842 "claimed": false, 00:29:07.842 "zoned": false, 00:29:07.842 "supported_io_types": { 00:29:07.842 "read": true, 00:29:07.842 "write": true, 00:29:07.842 "unmap": true, 00:29:07.842 "flush": false, 00:29:07.842 "reset": true, 00:29:07.842 "nvme_admin": false, 00:29:07.842 "nvme_io": false, 00:29:07.842 "nvme_io_md": false, 00:29:07.842 "write_zeroes": true, 00:29:07.842 "zcopy": false, 00:29:07.842 "get_zone_info": false, 00:29:07.842 "zone_management": false, 00:29:07.842 "zone_append": false, 00:29:07.842 "compare": false, 00:29:07.842 "compare_and_write": false, 00:29:07.842 "abort": false, 00:29:07.842 "seek_hole": true, 00:29:07.842 "seek_data": true, 00:29:07.842 "copy": false, 
00:29:07.842 "nvme_iov_md": false 00:29:07.842 }, 00:29:07.842 "driver_specific": { 00:29:07.842 "lvol": { 00:29:07.842 "lvol_store_uuid": "e1f4c384-28ed-434f-8781-38b8cfb934cc", 00:29:07.842 "base_bdev": "Nvme0n1", 00:29:07.842 "thin_provision": true, 00:29:07.842 "num_allocated_clusters": 0, 00:29:07.842 "snapshot": false, 00:29:07.842 "clone": false, 00:29:07.842 "esnap_clone": false 00:29:07.842 } 00:29:07.842 } 00:29:07.842 } 00:29:07.842 ] 00:29:07.842 22:12:27 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:29:07.842 22:12:27 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:29:07.842 22:12:27 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:29:07.842 [2024-07-13 22:12:27.226096] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:29:07.842 COMP_lvs0/lv0 00:29:08.101 22:12:27 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:29:08.101 22:12:27 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:29:08.101 22:12:27 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:29:08.101 22:12:27 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:29:08.101 22:12:27 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:29:08.101 22:12:27 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:29:08.101 22:12:27 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:08.101 22:12:27 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:29:08.359 [ 00:29:08.359 { 00:29:08.359 "name": "COMP_lvs0/lv0", 00:29:08.359 "aliases": [ 00:29:08.359 
"820e9fdf-44e1-519e-b0f6-e173d9685dc1" 00:29:08.359 ], 00:29:08.359 "product_name": "compress", 00:29:08.359 "block_size": 512, 00:29:08.359 "num_blocks": 200704, 00:29:08.359 "uuid": "820e9fdf-44e1-519e-b0f6-e173d9685dc1", 00:29:08.359 "assigned_rate_limits": { 00:29:08.359 "rw_ios_per_sec": 0, 00:29:08.359 "rw_mbytes_per_sec": 0, 00:29:08.359 "r_mbytes_per_sec": 0, 00:29:08.359 "w_mbytes_per_sec": 0 00:29:08.359 }, 00:29:08.359 "claimed": false, 00:29:08.359 "zoned": false, 00:29:08.359 "supported_io_types": { 00:29:08.359 "read": true, 00:29:08.359 "write": true, 00:29:08.359 "unmap": false, 00:29:08.359 "flush": false, 00:29:08.359 "reset": false, 00:29:08.359 "nvme_admin": false, 00:29:08.359 "nvme_io": false, 00:29:08.359 "nvme_io_md": false, 00:29:08.359 "write_zeroes": true, 00:29:08.359 "zcopy": false, 00:29:08.359 "get_zone_info": false, 00:29:08.359 "zone_management": false, 00:29:08.359 "zone_append": false, 00:29:08.359 "compare": false, 00:29:08.359 "compare_and_write": false, 00:29:08.359 "abort": false, 00:29:08.359 "seek_hole": false, 00:29:08.359 "seek_data": false, 00:29:08.359 "copy": false, 00:29:08.359 "nvme_iov_md": false 00:29:08.359 }, 00:29:08.359 "driver_specific": { 00:29:08.359 "compress": { 00:29:08.359 "name": "COMP_lvs0/lv0", 00:29:08.359 "base_bdev_name": "f4d86486-1e65-4abd-ac83-29f352efc158" 00:29:08.359 } 00:29:08.359 } 00:29:08.359 } 00:29:08.360 ] 00:29:08.360 22:12:27 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:29:08.360 22:12:27 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:08.360 [2024-07-13 22:12:27.647404] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e0000101e0 PMD being used: compress_qat 00:29:08.360 [2024-07-13 22:12:27.650211] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d5a0 PMD being used: compress_qat 00:29:08.360 Running I/O for 30 seconds... 
00:29:40.436 00:29:40.436 Latency(us) 00:29:40.436 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:40.436 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 64, IO size: 16384) 00:29:40.436 Verification LBA range: start 0x0 length 0xc40 00:29:40.436 COMP_lvs0/lv0 : 30.01 1773.42 27.71 0.00 0.00 35913.08 355.53 30408.70 00:29:40.436 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 64, IO size: 16384) 00:29:40.436 Verification LBA range: start 0xc40 length 0xc40 00:29:40.436 COMP_lvs0/lv0 : 30.01 5513.32 86.15 0.00 0.00 11506.73 339.15 24222.11 00:29:40.436 =================================================================================================================== 00:29:40.436 Total : 7286.74 113.86 0.00 0.00 17446.92 339.15 30408.70 00:29:40.436 0 00:29:40.436 22:12:57 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:29:40.436 22:12:57 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:29:40.436 22:12:57 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:29:40.436 22:12:58 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:29:40.436 22:12:58 compress_compdev -- compress/compress.sh@78 -- # killprocess 1540039 00:29:40.436 22:12:58 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 1540039 ']' 00:29:40.436 22:12:58 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 1540039 00:29:40.436 22:12:58 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:29:40.436 22:12:58 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:40.436 22:12:58 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1540039 00:29:40.436 22:12:58 compress_compdev -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:29:40.436 22:12:58 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:40.436 22:12:58 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1540039' 00:29:40.436 killing process with pid 1540039 00:29:40.436 22:12:58 compress_compdev -- common/autotest_common.sh@967 -- # kill 1540039 00:29:40.436 Received shutdown signal, test time was about 30.000000 seconds 00:29:40.436 00:29:40.436 Latency(us) 00:29:40.436 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:40.436 =================================================================================================================== 00:29:40.436 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:40.436 22:12:58 compress_compdev -- common/autotest_common.sh@972 -- # wait 1540039 00:29:42.377 22:13:01 compress_compdev -- compress/compress.sh@95 -- # export TEST_TRANSPORT=tcp 00:29:42.377 22:13:01 compress_compdev -- compress/compress.sh@95 -- # TEST_TRANSPORT=tcp 00:29:42.377 22:13:01 compress_compdev -- compress/compress.sh@96 -- # NET_TYPE=virt 00:29:42.377 22:13:01 compress_compdev -- compress/compress.sh@96 -- # nvmftestinit 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.377 22:13:01 compress_compdev -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:42.377 22:13:01 compress_compdev -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.377 22:13:01 compress_compdev -- 
nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:42.377 Cannot find device "nvmf_tgt_br" 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@155 -- # true 
00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:42.377 Cannot find device "nvmf_tgt_br2" 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@156 -- # true 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:42.377 Cannot find device "nvmf_tgt_br" 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@158 -- # true 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:42.377 Cannot find device "nvmf_tgt_br2" 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@159 -- # true 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:42.377 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@162 -- # true 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:42.377 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@163 -- # true 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:42.377 22:13:01 compress_compdev -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:42.378 22:13:01 compress_compdev -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:42.378 22:13:01 
compress_compdev -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:42.378 22:13:01 compress_compdev -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:42.378 22:13:01 compress_compdev -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:42.378 22:13:01 compress_compdev -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:42.378 22:13:01 compress_compdev -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:42.378 22:13:01 compress_compdev -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:42.378 22:13:01 compress_compdev -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:42.378 22:13:01 compress_compdev -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:42.378 22:13:01 compress_compdev -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:42.378 22:13:01 compress_compdev -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:42.378 22:13:01 compress_compdev -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:42.378 22:13:01 compress_compdev -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:42.378 22:13:01 compress_compdev -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:42.378 22:13:01 compress_compdev -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:42.378 22:13:01 compress_compdev -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:42.378 22:13:01 compress_compdev -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:42.378 22:13:01 compress_compdev -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:42.637 22:13:01 compress_compdev -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:42.637 22:13:01 
compress_compdev -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:42.637 22:13:01 compress_compdev -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:42.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:42.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:29:42.637 00:29:42.637 --- 10.0.0.2 ping statistics --- 00:29:42.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.637 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:29:42.637 22:13:01 compress_compdev -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:42.637 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:42.637 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:29:42.637 00:29:42.637 --- 10.0.0.3 ping statistics --- 00:29:42.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.637 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:29:42.637 22:13:01 compress_compdev -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:42.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:42.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:29:42.637 00:29:42.637 --- 10.0.0.1 ping statistics --- 00:29:42.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.637 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:29:42.637 22:13:01 compress_compdev -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:42.637 22:13:01 compress_compdev -- nvmf/common.sh@433 -- # return 0 00:29:42.637 22:13:01 compress_compdev -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:42.637 22:13:01 compress_compdev -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:42.637 22:13:01 compress_compdev -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:42.637 22:13:01 compress_compdev -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:42.637 22:13:01 compress_compdev -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:42.637 22:13:01 compress_compdev -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:42.637 22:13:01 compress_compdev -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:42.637 22:13:01 compress_compdev -- compress/compress.sh@97 -- # nvmfappstart -m 0x7 00:29:42.637 22:13:01 compress_compdev -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:42.637 22:13:01 compress_compdev -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:42.637 22:13:01 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:29:42.637 22:13:01 compress_compdev -- nvmf/common.sh@481 -- # nvmfpid=1547057 00:29:42.637 22:13:01 compress_compdev -- nvmf/common.sh@482 -- # waitforlisten 1547057 00:29:42.637 22:13:01 compress_compdev -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:29:42.637 22:13:01 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 1547057 ']' 00:29:42.637 22:13:01 compress_compdev -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:29:42.637 22:13:01 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:42.637 22:13:01 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:42.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:42.637 22:13:01 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:42.637 22:13:01 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:29:42.637 [2024-07-13 22:13:01.957621] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:42.638 [2024-07-13 22:13:01.957712] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:42.898 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:29:42.898 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:42.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:42.898 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:42.898 [2024-07-13 22:13:02.131174] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:43.158 [2024-07-13 22:13:02.337911] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:43.158 [2024-07-13 22:13:02.337958] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:43.158 [2024-07-13 22:13:02.337972] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:43.158 [2024-07-13 22:13:02.337983] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:43.158 [2024-07-13 22:13:02.337994] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:43.158 [2024-07-13 22:13:02.338128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.158 [2024-07-13 22:13:02.338228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.158 [2024-07-13 22:13:02.338236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:43.417 22:13:02 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:43.417 22:13:02 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:29:43.417 22:13:02 compress_compdev -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:43.417 22:13:02 compress_compdev -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:43.417 22:13:02 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:29:43.417 22:13:02 compress_compdev -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:43.417 22:13:02 compress_compdev -- compress/compress.sh@98 -- # trap 'nvmftestfini; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:43.417 22:13:02 compress_compdev -- compress/compress.sh@101 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -u 8192 00:29:43.676 [2024-07-13 22:13:02.922459] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:43.676 22:13:02 compress_compdev -- compress/compress.sh@102 -- # create_vols 00:29:43.676 22:13:02 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:43.676 22:13:02 compress_compdev -- 
compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:46.964 22:13:06 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:29:46.964 22:13:06 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:29:46.964 22:13:06 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:29:46.965 22:13:06 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:29:46.965 22:13:06 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:29:46.965 22:13:06 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:29:46.965 22:13:06 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:46.965 22:13:06 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:29:47.224 [ 00:29:47.224 { 00:29:47.224 "name": "Nvme0n1", 00:29:47.224 "aliases": [ 00:29:47.224 "8b13196b-e884-47ad-b3a2-58701a6dcfe0" 00:29:47.224 ], 00:29:47.224 "product_name": "NVMe disk", 00:29:47.224 "block_size": 512, 00:29:47.224 "num_blocks": 3907029168, 00:29:47.224 "uuid": "8b13196b-e884-47ad-b3a2-58701a6dcfe0", 00:29:47.224 "assigned_rate_limits": { 00:29:47.224 "rw_ios_per_sec": 0, 00:29:47.224 "rw_mbytes_per_sec": 0, 00:29:47.224 "r_mbytes_per_sec": 0, 00:29:47.224 "w_mbytes_per_sec": 0 00:29:47.224 }, 00:29:47.224 "claimed": false, 00:29:47.224 "zoned": false, 00:29:47.224 "supported_io_types": { 00:29:47.224 "read": true, 00:29:47.224 "write": true, 00:29:47.224 "unmap": true, 00:29:47.224 "flush": true, 00:29:47.224 "reset": true, 00:29:47.224 "nvme_admin": true, 00:29:47.224 "nvme_io": true, 00:29:47.224 "nvme_io_md": false, 00:29:47.224 "write_zeroes": true, 00:29:47.224 "zcopy": false, 00:29:47.224 "get_zone_info": false, 00:29:47.224 
"zone_management": false, 00:29:47.224 "zone_append": false, 00:29:47.224 "compare": false, 00:29:47.224 "compare_and_write": false, 00:29:47.224 "abort": true, 00:29:47.224 "seek_hole": false, 00:29:47.224 "seek_data": false, 00:29:47.224 "copy": false, 00:29:47.224 "nvme_iov_md": false 00:29:47.224 }, 00:29:47.224 "driver_specific": { 00:29:47.224 "nvme": [ 00:29:47.224 { 00:29:47.224 "pci_address": "0000:d8:00.0", 00:29:47.224 "trid": { 00:29:47.224 "trtype": "PCIe", 00:29:47.224 "traddr": "0000:d8:00.0" 00:29:47.224 }, 00:29:47.224 "ctrlr_data": { 00:29:47.224 "cntlid": 0, 00:29:47.224 "vendor_id": "0x8086", 00:29:47.224 "model_number": "INTEL SSDPE2KX020T8", 00:29:47.224 "serial_number": "BTLJ125505KA2P0BGN", 00:29:47.224 "firmware_revision": "VDV10170", 00:29:47.224 "oacs": { 00:29:47.224 "security": 0, 00:29:47.224 "format": 1, 00:29:47.224 "firmware": 1, 00:29:47.224 "ns_manage": 1 00:29:47.224 }, 00:29:47.224 "multi_ctrlr": false, 00:29:47.224 "ana_reporting": false 00:29:47.224 }, 00:29:47.224 "vs": { 00:29:47.224 "nvme_version": "1.2" 00:29:47.224 }, 00:29:47.224 "ns_data": { 00:29:47.224 "id": 1, 00:29:47.224 "can_share": false 00:29:47.224 } 00:29:47.224 } 00:29:47.224 ], 00:29:47.224 "mp_policy": "active_passive" 00:29:47.224 } 00:29:47.224 } 00:29:47.224 ] 00:29:47.224 22:13:06 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:29:47.224 22:13:06 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:29:48.603 ba52f564-3a48-43ed-9d90-a15b9cd88f6e 00:29:48.603 22:13:07 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:29:48.603 1cd8ba5a-0002-406f-8e7e-fe835fa2308a 00:29:48.603 22:13:07 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:29:48.603 22:13:07 compress_compdev -- 
common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:29:48.603 22:13:07 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:29:48.603 22:13:07 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:29:48.603 22:13:07 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:29:48.603 22:13:07 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:29:48.603 22:13:07 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:48.603 22:13:07 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:29:48.863 [ 00:29:48.863 { 00:29:48.863 "name": "1cd8ba5a-0002-406f-8e7e-fe835fa2308a", 00:29:48.863 "aliases": [ 00:29:48.863 "lvs0/lv0" 00:29:48.863 ], 00:29:48.863 "product_name": "Logical Volume", 00:29:48.863 "block_size": 512, 00:29:48.863 "num_blocks": 204800, 00:29:48.863 "uuid": "1cd8ba5a-0002-406f-8e7e-fe835fa2308a", 00:29:48.863 "assigned_rate_limits": { 00:29:48.863 "rw_ios_per_sec": 0, 00:29:48.863 "rw_mbytes_per_sec": 0, 00:29:48.863 "r_mbytes_per_sec": 0, 00:29:48.863 "w_mbytes_per_sec": 0 00:29:48.863 }, 00:29:48.863 "claimed": false, 00:29:48.863 "zoned": false, 00:29:48.863 "supported_io_types": { 00:29:48.863 "read": true, 00:29:48.863 "write": true, 00:29:48.863 "unmap": true, 00:29:48.863 "flush": false, 00:29:48.863 "reset": true, 00:29:48.863 "nvme_admin": false, 00:29:48.863 "nvme_io": false, 00:29:48.863 "nvme_io_md": false, 00:29:48.863 "write_zeroes": true, 00:29:48.863 "zcopy": false, 00:29:48.863 "get_zone_info": false, 00:29:48.863 "zone_management": false, 00:29:48.863 "zone_append": false, 00:29:48.863 "compare": false, 00:29:48.863 "compare_and_write": false, 00:29:48.863 "abort": false, 00:29:48.863 "seek_hole": true, 00:29:48.863 "seek_data": true, 00:29:48.863 "copy": false, 
00:29:48.863 "nvme_iov_md": false 00:29:48.863 }, 00:29:48.863 "driver_specific": { 00:29:48.863 "lvol": { 00:29:48.863 "lvol_store_uuid": "ba52f564-3a48-43ed-9d90-a15b9cd88f6e", 00:29:48.863 "base_bdev": "Nvme0n1", 00:29:48.863 "thin_provision": true, 00:29:48.863 "num_allocated_clusters": 0, 00:29:48.863 "snapshot": false, 00:29:48.863 "clone": false, 00:29:48.863 "esnap_clone": false 00:29:48.863 } 00:29:48.863 } 00:29:48.863 } 00:29:48.863 ] 00:29:48.863 22:13:08 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:29:48.863 22:13:08 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:29:48.863 22:13:08 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:29:48.863 [2024-07-13 22:13:08.233198] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:29:48.863 COMP_lvs0/lv0 00:29:48.863 22:13:08 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:29:48.863 22:13:08 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:29:48.863 22:13:08 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:29:48.863 22:13:08 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:29:48.863 22:13:08 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:29:48.863 22:13:08 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:29:48.863 22:13:08 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:49.122 22:13:08 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:29:49.381 [ 00:29:49.381 { 00:29:49.381 "name": "COMP_lvs0/lv0", 00:29:49.381 "aliases": [ 00:29:49.381 
"6203cc22-da85-50ba-8400-62559320a090" 00:29:49.381 ], 00:29:49.381 "product_name": "compress", 00:29:49.381 "block_size": 512, 00:29:49.381 "num_blocks": 200704, 00:29:49.381 "uuid": "6203cc22-da85-50ba-8400-62559320a090", 00:29:49.381 "assigned_rate_limits": { 00:29:49.381 "rw_ios_per_sec": 0, 00:29:49.381 "rw_mbytes_per_sec": 0, 00:29:49.381 "r_mbytes_per_sec": 0, 00:29:49.381 "w_mbytes_per_sec": 0 00:29:49.381 }, 00:29:49.381 "claimed": false, 00:29:49.381 "zoned": false, 00:29:49.381 "supported_io_types": { 00:29:49.381 "read": true, 00:29:49.381 "write": true, 00:29:49.381 "unmap": false, 00:29:49.381 "flush": false, 00:29:49.381 "reset": false, 00:29:49.381 "nvme_admin": false, 00:29:49.381 "nvme_io": false, 00:29:49.381 "nvme_io_md": false, 00:29:49.381 "write_zeroes": true, 00:29:49.381 "zcopy": false, 00:29:49.381 "get_zone_info": false, 00:29:49.381 "zone_management": false, 00:29:49.381 "zone_append": false, 00:29:49.381 "compare": false, 00:29:49.381 "compare_and_write": false, 00:29:49.381 "abort": false, 00:29:49.381 "seek_hole": false, 00:29:49.381 "seek_data": false, 00:29:49.381 "copy": false, 00:29:49.381 "nvme_iov_md": false 00:29:49.381 }, 00:29:49.381 "driver_specific": { 00:29:49.381 "compress": { 00:29:49.381 "name": "COMP_lvs0/lv0", 00:29:49.381 "base_bdev_name": "1cd8ba5a-0002-406f-8e7e-fe835fa2308a" 00:29:49.381 } 00:29:49.381 } 00:29:49.381 } 00:29:49.381 ] 00:29:49.381 22:13:08 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:29:49.381 22:13:08 compress_compdev -- compress/compress.sh@103 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:49.640 22:13:08 compress_compdev -- compress/compress.sh@104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 COMP_lvs0/lv0 00:29:49.640 22:13:08 compress_compdev -- compress/compress.sh@105 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:49.899 [2024-07-13 22:13:09.090143] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:49.899 22:13:09 compress_compdev -- compress/compress.sh@108 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 64 -s 512 -w randrw -t 30 -c 0x18 -M 50 00:29:49.899 22:13:09 compress_compdev -- compress/compress.sh@109 -- # perf_pid=1548342 00:29:49.899 22:13:09 compress_compdev -- compress/compress.sh@112 -- # trap 'killprocess $perf_pid; compress_err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:49.899 22:13:09 compress_compdev -- compress/compress.sh@113 -- # wait 1548342 00:29:50.158 [2024-07-13 22:13:09.367986] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:30:22.253 Initializing NVMe Controllers 00:30:22.253 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:22.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:30:22.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:30:22.253 Initialization complete. Launching workers. 
00:30:22.253 ======================================================== 00:30:22.253 Latency(us) 00:30:22.253 Device Information : IOPS MiB/s Average min max 00:30:22.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 5725.33 22.36 11179.10 1565.47 28965.59 00:30:22.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 3600.97 14.07 17774.92 3441.25 33927.25 00:30:22.253 ======================================================== 00:30:22.253 Total : 9326.30 36.43 13725.81 1565.47 33927.25 00:30:22.253 00:30:22.253 22:13:39 compress_compdev -- compress/compress.sh@114 -- # destroy_vols 00:30:22.253 22:13:39 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:30:22.253 22:13:39 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:30:22.253 22:13:39 compress_compdev -- compress/compress.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:30:22.253 22:13:39 compress_compdev -- compress/compress.sh@117 -- # nvmftestfini 00:30:22.253 22:13:39 compress_compdev -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:22.253 22:13:39 compress_compdev -- nvmf/common.sh@117 -- # sync 00:30:22.253 22:13:39 compress_compdev -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:22.253 22:13:39 compress_compdev -- nvmf/common.sh@120 -- # set +e 00:30:22.253 22:13:39 compress_compdev -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:22.253 22:13:39 compress_compdev -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:22.253 rmmod nvme_tcp 00:30:22.253 rmmod nvme_fabrics 00:30:22.253 rmmod nvme_keyring 00:30:22.253 22:13:39 compress_compdev -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:22.253 22:13:39 compress_compdev -- nvmf/common.sh@124 -- # set -e 00:30:22.253 22:13:39 compress_compdev -- nvmf/common.sh@125 -- # return 0 00:30:22.253 22:13:39 
compress_compdev -- nvmf/common.sh@489 -- # '[' -n 1547057 ']' 00:30:22.253 22:13:39 compress_compdev -- nvmf/common.sh@490 -- # killprocess 1547057 00:30:22.253 22:13:39 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 1547057 ']' 00:30:22.253 22:13:39 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 1547057 00:30:22.253 22:13:39 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:30:22.253 22:13:39 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:22.253 22:13:39 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1547057 00:30:22.253 22:13:40 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:22.253 22:13:40 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:22.253 22:13:40 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1547057' 00:30:22.253 killing process with pid 1547057 00:30:22.253 22:13:40 compress_compdev -- common/autotest_common.sh@967 -- # kill 1547057 00:30:22.253 22:13:40 compress_compdev -- common/autotest_common.sh@972 -- # wait 1547057 00:30:24.158 22:13:43 compress_compdev -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:24.158 22:13:43 compress_compdev -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:24.158 22:13:43 compress_compdev -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:24.158 22:13:43 compress_compdev -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:24.158 22:13:43 compress_compdev -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:24.158 22:13:43 compress_compdev -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.158 22:13:43 compress_compdev -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:24.158 22:13:43 compress_compdev -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:24.158 22:13:43 compress_compdev -- nvmf/common.sh@279 -- # ip 
-4 addr flush nvmf_init_if 00:30:24.158 22:13:43 compress_compdev -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:30:24.158 00:30:24.158 real 2m16.284s 00:30:24.158 user 6m6.563s 00:30:24.158 sys 0m19.103s 00:30:24.158 22:13:43 compress_compdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:24.158 22:13:43 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:30:24.158 ************************************ 00:30:24.158 END TEST compress_compdev 00:30:24.158 ************************************ 00:30:24.158 22:13:43 -- common/autotest_common.sh@1142 -- # return 0 00:30:24.158 22:13:43 -- spdk/autotest.sh@349 -- # run_test compress_isal /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:30:24.158 22:13:43 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:24.158 22:13:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:24.158 22:13:43 -- common/autotest_common.sh@10 -- # set +x 00:30:24.158 ************************************ 00:30:24.158 START TEST compress_isal 00:30:24.158 ************************************ 00:30:24.158 22:13:43 compress_isal -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:30:24.158 * Looking for test storage... 
00:30:24.158 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:30:24.158 22:13:43 compress_isal -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:30:24.158 22:13:43 compress_isal -- nvmf/common.sh@7 -- # uname -s 00:30:24.158 22:13:43 compress_isal -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:24.158 22:13:43 compress_isal -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:24.158 22:13:43 compress_isal -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:24.158 22:13:43 compress_isal -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:24.158 22:13:43 compress_isal -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:24.158 22:13:43 compress_isal -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:24.158 22:13:43 compress_isal -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:24.158 22:13:43 compress_isal -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:24.158 22:13:43 compress_isal -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:24.158 22:13:43 compress_isal -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:24.158 22:13:43 compress_isal -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:30:24.158 22:13:43 compress_isal -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:30:24.158 22:13:43 compress_isal -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:24.158 22:13:43 compress_isal -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:24.158 22:13:43 compress_isal -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:30:24.158 22:13:43 compress_isal -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:24.158 22:13:43 compress_isal -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:30:24.158 22:13:43 compress_isal -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:24.158 22:13:43 compress_isal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:24.158 22:13:43 compress_isal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:24.418 22:13:43 compress_isal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.418 22:13:43 compress_isal -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.418 22:13:43 compress_isal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.418 22:13:43 compress_isal -- paths/export.sh@5 -- # export PATH 00:30:24.418 22:13:43 compress_isal -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.418 22:13:43 compress_isal -- nvmf/common.sh@47 -- # : 0 00:30:24.418 22:13:43 compress_isal -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:24.418 22:13:43 compress_isal -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:24.418 22:13:43 compress_isal -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:24.418 22:13:43 compress_isal -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:24.418 22:13:43 compress_isal -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:24.418 22:13:43 compress_isal -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:24.418 22:13:43 compress_isal -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:24.418 22:13:43 compress_isal -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:24.418 22:13:43 compress_isal -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:24.418 22:13:43 compress_isal -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:30:24.418 22:13:43 compress_isal -- compress/compress.sh@82 -- # test_type=isal 00:30:24.418 22:13:43 compress_isal -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:30:24.418 22:13:43 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:30:24.418 22:13:43 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=1553926 00:30:24.418 22:13:43 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:24.418 22:13:43 compress_isal -- 
compress/compress.sh@73 -- # waitforlisten 1553926 00:30:24.418 22:13:43 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 1553926 ']' 00:30:24.418 22:13:43 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:30:24.418 22:13:43 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:24.418 22:13:43 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:24.418 22:13:43 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:24.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:24.418 22:13:43 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:24.418 22:13:43 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:30:24.418 [2024-07-13 22:13:43.646623] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:30:24.418 [2024-07-13 22:13:43.646722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1553926 ] 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3d:02.3 cannot be used 
00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3f:01.6 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:24.418 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:24.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.418 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:24.677 [2024-07-13 22:13:43.810587] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:24.677 [2024-07-13 22:13:44.015083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:24.677 [2024-07-13 22:13:44.015087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:25.243 22:13:44 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:25.243 22:13:44 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:30:25.243 22:13:44 compress_isal -- compress/compress.sh@74 -- # create_vols 00:30:25.243 22:13:44 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:25.243 22:13:44 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:28.605 22:13:47 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:30:28.605 22:13:47 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:30:28.605 22:13:47 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:28.605 22:13:47 
compress_isal -- common/autotest_common.sh@899 -- # local i 00:30:28.605 22:13:47 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:28.605 22:13:47 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:28.605 22:13:47 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:28.605 22:13:47 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:30:28.605 [ 00:30:28.605 { 00:30:28.605 "name": "Nvme0n1", 00:30:28.605 "aliases": [ 00:30:28.605 "049f9684-49cb-4ed7-bb0f-10c1ab9d1760" 00:30:28.605 ], 00:30:28.605 "product_name": "NVMe disk", 00:30:28.605 "block_size": 512, 00:30:28.605 "num_blocks": 3907029168, 00:30:28.605 "uuid": "049f9684-49cb-4ed7-bb0f-10c1ab9d1760", 00:30:28.605 "assigned_rate_limits": { 00:30:28.605 "rw_ios_per_sec": 0, 00:30:28.605 "rw_mbytes_per_sec": 0, 00:30:28.605 "r_mbytes_per_sec": 0, 00:30:28.605 "w_mbytes_per_sec": 0 00:30:28.605 }, 00:30:28.605 "claimed": false, 00:30:28.605 "zoned": false, 00:30:28.605 "supported_io_types": { 00:30:28.605 "read": true, 00:30:28.605 "write": true, 00:30:28.605 "unmap": true, 00:30:28.605 "flush": true, 00:30:28.605 "reset": true, 00:30:28.605 "nvme_admin": true, 00:30:28.605 "nvme_io": true, 00:30:28.605 "nvme_io_md": false, 00:30:28.605 "write_zeroes": true, 00:30:28.605 "zcopy": false, 00:30:28.605 "get_zone_info": false, 00:30:28.605 "zone_management": false, 00:30:28.605 "zone_append": false, 00:30:28.605 "compare": false, 00:30:28.605 "compare_and_write": false, 00:30:28.605 "abort": true, 00:30:28.605 "seek_hole": false, 00:30:28.605 "seek_data": false, 00:30:28.605 "copy": false, 00:30:28.605 "nvme_iov_md": false 00:30:28.605 }, 00:30:28.605 "driver_specific": { 00:30:28.605 "nvme": [ 00:30:28.605 { 00:30:28.605 "pci_address": "0000:d8:00.0", 00:30:28.605 "trid": { 00:30:28.605 
"trtype": "PCIe", 00:30:28.605 "traddr": "0000:d8:00.0" 00:30:28.605 }, 00:30:28.605 "ctrlr_data": { 00:30:28.605 "cntlid": 0, 00:30:28.605 "vendor_id": "0x8086", 00:30:28.605 "model_number": "INTEL SSDPE2KX020T8", 00:30:28.605 "serial_number": "BTLJ125505KA2P0BGN", 00:30:28.605 "firmware_revision": "VDV10170", 00:30:28.605 "oacs": { 00:30:28.605 "security": 0, 00:30:28.605 "format": 1, 00:30:28.605 "firmware": 1, 00:30:28.605 "ns_manage": 1 00:30:28.605 }, 00:30:28.605 "multi_ctrlr": false, 00:30:28.605 "ana_reporting": false 00:30:28.605 }, 00:30:28.605 "vs": { 00:30:28.605 "nvme_version": "1.2" 00:30:28.605 }, 00:30:28.605 "ns_data": { 00:30:28.605 "id": 1, 00:30:28.605 "can_share": false 00:30:28.605 } 00:30:28.605 } 00:30:28.605 ], 00:30:28.605 "mp_policy": "active_passive" 00:30:28.605 } 00:30:28.605 } 00:30:28.605 ] 00:30:28.605 22:13:47 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:30:28.605 22:13:47 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:30:29.984 fb1b9334-380b-4346-ab81-0821c101d6dc 00:30:29.984 22:13:49 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:30:29.984 742cf03e-5c37-4420-baa8-f454bdc6cd67 00:30:29.984 22:13:49 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:30:29.984 22:13:49 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:30:29.984 22:13:49 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:29.984 22:13:49 compress_isal -- common/autotest_common.sh@899 -- # local i 00:30:29.984 22:13:49 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:29.984 22:13:49 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:29.984 22:13:49 compress_isal -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:29.984 22:13:49 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:30:30.243 [ 00:30:30.243 { 00:30:30.243 "name": "742cf03e-5c37-4420-baa8-f454bdc6cd67", 00:30:30.243 "aliases": [ 00:30:30.243 "lvs0/lv0" 00:30:30.243 ], 00:30:30.243 "product_name": "Logical Volume", 00:30:30.243 "block_size": 512, 00:30:30.243 "num_blocks": 204800, 00:30:30.243 "uuid": "742cf03e-5c37-4420-baa8-f454bdc6cd67", 00:30:30.243 "assigned_rate_limits": { 00:30:30.243 "rw_ios_per_sec": 0, 00:30:30.243 "rw_mbytes_per_sec": 0, 00:30:30.243 "r_mbytes_per_sec": 0, 00:30:30.243 "w_mbytes_per_sec": 0 00:30:30.243 }, 00:30:30.243 "claimed": false, 00:30:30.243 "zoned": false, 00:30:30.243 "supported_io_types": { 00:30:30.243 "read": true, 00:30:30.243 "write": true, 00:30:30.243 "unmap": true, 00:30:30.243 "flush": false, 00:30:30.243 "reset": true, 00:30:30.243 "nvme_admin": false, 00:30:30.243 "nvme_io": false, 00:30:30.243 "nvme_io_md": false, 00:30:30.243 "write_zeroes": true, 00:30:30.243 "zcopy": false, 00:30:30.243 "get_zone_info": false, 00:30:30.243 "zone_management": false, 00:30:30.243 "zone_append": false, 00:30:30.243 "compare": false, 00:30:30.243 "compare_and_write": false, 00:30:30.243 "abort": false, 00:30:30.243 "seek_hole": true, 00:30:30.243 "seek_data": true, 00:30:30.243 "copy": false, 00:30:30.243 "nvme_iov_md": false 00:30:30.243 }, 00:30:30.243 "driver_specific": { 00:30:30.243 "lvol": { 00:30:30.243 "lvol_store_uuid": "fb1b9334-380b-4346-ab81-0821c101d6dc", 00:30:30.243 "base_bdev": "Nvme0n1", 00:30:30.243 "thin_provision": true, 00:30:30.243 "num_allocated_clusters": 0, 00:30:30.243 "snapshot": false, 00:30:30.243 "clone": false, 00:30:30.244 "esnap_clone": false 00:30:30.244 } 00:30:30.244 } 00:30:30.244 } 00:30:30.244 ] 00:30:30.244 22:13:49 compress_isal -- 
common/autotest_common.sh@905 -- # return 0 00:30:30.244 22:13:49 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:30:30.244 22:13:49 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:30:30.503 [2024-07-13 22:13:49.696324] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:30:30.503 COMP_lvs0/lv0 00:30:30.503 22:13:49 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:30:30.503 22:13:49 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:30:30.503 22:13:49 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:30.503 22:13:49 compress_isal -- common/autotest_common.sh@899 -- # local i 00:30:30.503 22:13:49 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:30.503 22:13:49 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:30.503 22:13:49 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:30.503 22:13:49 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:30:30.762 [ 00:30:30.762 { 00:30:30.762 "name": "COMP_lvs0/lv0", 00:30:30.762 "aliases": [ 00:30:30.762 "8d7a5c97-cd72-5ea2-b99b-3a8e067688fe" 00:30:30.762 ], 00:30:30.762 "product_name": "compress", 00:30:30.762 "block_size": 512, 00:30:30.762 "num_blocks": 200704, 00:30:30.762 "uuid": "8d7a5c97-cd72-5ea2-b99b-3a8e067688fe", 00:30:30.762 "assigned_rate_limits": { 00:30:30.762 "rw_ios_per_sec": 0, 00:30:30.762 "rw_mbytes_per_sec": 0, 00:30:30.762 "r_mbytes_per_sec": 0, 00:30:30.762 "w_mbytes_per_sec": 0 00:30:30.762 }, 00:30:30.762 "claimed": false, 00:30:30.762 "zoned": false, 00:30:30.762 "supported_io_types": { 
00:30:30.762 "read": true, 00:30:30.762 "write": true, 00:30:30.762 "unmap": false, 00:30:30.762 "flush": false, 00:30:30.762 "reset": false, 00:30:30.762 "nvme_admin": false, 00:30:30.762 "nvme_io": false, 00:30:30.762 "nvme_io_md": false, 00:30:30.762 "write_zeroes": true, 00:30:30.762 "zcopy": false, 00:30:30.762 "get_zone_info": false, 00:30:30.762 "zone_management": false, 00:30:30.762 "zone_append": false, 00:30:30.762 "compare": false, 00:30:30.762 "compare_and_write": false, 00:30:30.762 "abort": false, 00:30:30.762 "seek_hole": false, 00:30:30.762 "seek_data": false, 00:30:30.762 "copy": false, 00:30:30.762 "nvme_iov_md": false 00:30:30.762 }, 00:30:30.762 "driver_specific": { 00:30:30.762 "compress": { 00:30:30.762 "name": "COMP_lvs0/lv0", 00:30:30.762 "base_bdev_name": "742cf03e-5c37-4420-baa8-f454bdc6cd67" 00:30:30.762 } 00:30:30.762 } 00:30:30.762 } 00:30:30.762 ] 00:30:30.762 22:13:50 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:30:30.762 22:13:50 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:30.762 Running I/O for 3 seconds... 
00:30:34.049 00:30:34.049 Latency(us) 00:30:34.049 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:34.049 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:30:34.049 Verification LBA range: start 0x0 length 0x3100 00:30:34.049 COMP_lvs0/lv0 : 3.01 3308.26 12.92 0.00 0.00 9629.29 61.03 14994.64 00:30:34.049 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:30:34.049 Verification LBA range: start 0x3100 length 0x3100 00:30:34.049 COMP_lvs0/lv0 : 3.01 3327.98 13.00 0.00 0.00 9569.77 61.03 14994.64 00:30:34.049 =================================================================================================================== 00:30:34.049 Total : 6636.25 25.92 0.00 0.00 9599.43 61.03 14994.64 00:30:34.049 0 00:30:34.049 22:13:53 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:30:34.049 22:13:53 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:30:34.049 22:13:53 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:30:34.308 22:13:53 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:30:34.308 22:13:53 compress_isal -- compress/compress.sh@78 -- # killprocess 1553926 00:30:34.308 22:13:53 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 1553926 ']' 00:30:34.308 22:13:53 compress_isal -- common/autotest_common.sh@952 -- # kill -0 1553926 00:30:34.308 22:13:53 compress_isal -- common/autotest_common.sh@953 -- # uname 00:30:34.308 22:13:53 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:34.308 22:13:53 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1553926 00:30:34.308 22:13:53 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:34.308 22:13:53 compress_isal 
-- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:34.308 22:13:53 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1553926' 00:30:34.308 killing process with pid 1553926 00:30:34.308 22:13:53 compress_isal -- common/autotest_common.sh@967 -- # kill 1553926 00:30:34.308 Received shutdown signal, test time was about 3.000000 seconds 00:30:34.308 00:30:34.308 Latency(us) 00:30:34.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:34.308 =================================================================================================================== 00:30:34.308 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:34.308 22:13:53 compress_isal -- common/autotest_common.sh@972 -- # wait 1553926 00:30:38.501 22:13:57 compress_isal -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:30:38.501 22:13:57 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:30:38.501 22:13:57 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=1556225 00:30:38.501 22:13:57 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:38.501 22:13:57 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:30:38.501 22:13:57 compress_isal -- compress/compress.sh@73 -- # waitforlisten 1556225 00:30:38.501 22:13:57 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 1556225 ']' 00:30:38.501 22:13:57 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:38.501 22:13:57 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:38.501 22:13:57 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:38.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:38.501 22:13:57 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:38.501 22:13:57 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:30:38.501 [2024-07-13 22:13:57.189662] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:38.501 [2024-07-13 22:13:57.189764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1556225 ] 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:38.501 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3f:01.6 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:30:38.501 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:38.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:38.501 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:38.501 [2024-07-13 22:13:57.354656] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:38.501 [2024-07-13 22:13:57.556458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:38.501 [2024-07-13 22:13:57.556464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:38.760 22:13:57 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:38.760 22:13:57 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:30:38.760 22:13:57 compress_isal -- compress/compress.sh@74 -- # create_vols 512 00:30:38.760 22:13:57 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:38.760 22:13:57 compress_isal -- compress/compress.sh@34 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:42.050 22:14:01 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:30:42.050 22:14:01 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:30:42.050 22:14:01 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:42.050 22:14:01 compress_isal -- common/autotest_common.sh@899 -- # local i 00:30:42.050 22:14:01 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:42.050 22:14:01 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:42.050 22:14:01 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:42.050 22:14:01 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:30:42.050 [ 00:30:42.050 { 00:30:42.050 "name": "Nvme0n1", 00:30:42.050 "aliases": [ 00:30:42.050 "37a48529-2ddd-4f8c-88db-897dbda55d32" 00:30:42.050 ], 00:30:42.050 "product_name": "NVMe disk", 00:30:42.050 "block_size": 512, 00:30:42.050 "num_blocks": 3907029168, 00:30:42.050 "uuid": "37a48529-2ddd-4f8c-88db-897dbda55d32", 00:30:42.050 "assigned_rate_limits": { 00:30:42.050 "rw_ios_per_sec": 0, 00:30:42.050 "rw_mbytes_per_sec": 0, 00:30:42.050 "r_mbytes_per_sec": 0, 00:30:42.050 "w_mbytes_per_sec": 0 00:30:42.050 }, 00:30:42.050 "claimed": false, 00:30:42.050 "zoned": false, 00:30:42.050 "supported_io_types": { 00:30:42.050 "read": true, 00:30:42.050 "write": true, 00:30:42.050 "unmap": true, 00:30:42.050 "flush": true, 00:30:42.050 "reset": true, 00:30:42.050 "nvme_admin": true, 00:30:42.050 "nvme_io": true, 00:30:42.050 "nvme_io_md": false, 00:30:42.050 "write_zeroes": true, 00:30:42.050 "zcopy": false, 00:30:42.050 "get_zone_info": false, 00:30:42.050 "zone_management": false, 00:30:42.050 "zone_append": false, 
00:30:42.050 "compare": false, 00:30:42.050 "compare_and_write": false, 00:30:42.050 "abort": true, 00:30:42.050 "seek_hole": false, 00:30:42.050 "seek_data": false, 00:30:42.050 "copy": false, 00:30:42.050 "nvme_iov_md": false 00:30:42.050 }, 00:30:42.050 "driver_specific": { 00:30:42.050 "nvme": [ 00:30:42.050 { 00:30:42.050 "pci_address": "0000:d8:00.0", 00:30:42.050 "trid": { 00:30:42.050 "trtype": "PCIe", 00:30:42.050 "traddr": "0000:d8:00.0" 00:30:42.050 }, 00:30:42.050 "ctrlr_data": { 00:30:42.050 "cntlid": 0, 00:30:42.050 "vendor_id": "0x8086", 00:30:42.050 "model_number": "INTEL SSDPE2KX020T8", 00:30:42.050 "serial_number": "BTLJ125505KA2P0BGN", 00:30:42.050 "firmware_revision": "VDV10170", 00:30:42.050 "oacs": { 00:30:42.050 "security": 0, 00:30:42.050 "format": 1, 00:30:42.050 "firmware": 1, 00:30:42.050 "ns_manage": 1 00:30:42.050 }, 00:30:42.050 "multi_ctrlr": false, 00:30:42.050 "ana_reporting": false 00:30:42.050 }, 00:30:42.050 "vs": { 00:30:42.050 "nvme_version": "1.2" 00:30:42.050 }, 00:30:42.050 "ns_data": { 00:30:42.050 "id": 1, 00:30:42.050 "can_share": false 00:30:42.050 } 00:30:42.050 } 00:30:42.050 ], 00:30:42.050 "mp_policy": "active_passive" 00:30:42.050 } 00:30:42.050 } 00:30:42.050 ] 00:30:42.050 22:14:01 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:30:42.050 22:14:01 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:30:43.427 2519f041-176a-4a1f-804b-842981e0d209 00:30:43.427 22:14:02 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:30:43.427 b29a7fa3-d052-4b83-9fee-afb01db9556f 00:30:43.427 22:14:02 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:30:43.427 22:14:02 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:30:43.427 22:14:02 compress_isal -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:43.427 22:14:02 compress_isal -- common/autotest_common.sh@899 -- # local i 00:30:43.427 22:14:02 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:43.427 22:14:02 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:43.427 22:14:02 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:43.686 22:14:02 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:30:43.944 [ 00:30:43.944 { 00:30:43.944 "name": "b29a7fa3-d052-4b83-9fee-afb01db9556f", 00:30:43.944 "aliases": [ 00:30:43.944 "lvs0/lv0" 00:30:43.944 ], 00:30:43.944 "product_name": "Logical Volume", 00:30:43.944 "block_size": 512, 00:30:43.944 "num_blocks": 204800, 00:30:43.944 "uuid": "b29a7fa3-d052-4b83-9fee-afb01db9556f", 00:30:43.944 "assigned_rate_limits": { 00:30:43.944 "rw_ios_per_sec": 0, 00:30:43.944 "rw_mbytes_per_sec": 0, 00:30:43.944 "r_mbytes_per_sec": 0, 00:30:43.944 "w_mbytes_per_sec": 0 00:30:43.944 }, 00:30:43.944 "claimed": false, 00:30:43.944 "zoned": false, 00:30:43.944 "supported_io_types": { 00:30:43.944 "read": true, 00:30:43.944 "write": true, 00:30:43.944 "unmap": true, 00:30:43.944 "flush": false, 00:30:43.944 "reset": true, 00:30:43.944 "nvme_admin": false, 00:30:43.944 "nvme_io": false, 00:30:43.944 "nvme_io_md": false, 00:30:43.944 "write_zeroes": true, 00:30:43.944 "zcopy": false, 00:30:43.944 "get_zone_info": false, 00:30:43.944 "zone_management": false, 00:30:43.944 "zone_append": false, 00:30:43.944 "compare": false, 00:30:43.944 "compare_and_write": false, 00:30:43.944 "abort": false, 00:30:43.944 "seek_hole": true, 00:30:43.944 "seek_data": true, 00:30:43.944 "copy": false, 00:30:43.944 "nvme_iov_md": false 00:30:43.944 }, 00:30:43.944 "driver_specific": { 00:30:43.944 "lvol": { 00:30:43.944 
"lvol_store_uuid": "2519f041-176a-4a1f-804b-842981e0d209", 00:30:43.944 "base_bdev": "Nvme0n1", 00:30:43.944 "thin_provision": true, 00:30:43.945 "num_allocated_clusters": 0, 00:30:43.945 "snapshot": false, 00:30:43.945 "clone": false, 00:30:43.945 "esnap_clone": false 00:30:43.945 } 00:30:43.945 } 00:30:43.945 } 00:30:43.945 ] 00:30:43.945 22:14:03 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:30:43.945 22:14:03 compress_isal -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:30:43.945 22:14:03 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:30:43.945 [2024-07-13 22:14:03.281850] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:30:43.945 COMP_lvs0/lv0 00:30:43.945 22:14:03 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:30:43.945 22:14:03 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:30:43.945 22:14:03 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:43.945 22:14:03 compress_isal -- common/autotest_common.sh@899 -- # local i 00:30:43.945 22:14:03 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:43.945 22:14:03 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:43.945 22:14:03 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:44.253 22:14:03 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:30:44.253 [ 00:30:44.253 { 00:30:44.253 "name": "COMP_lvs0/lv0", 00:30:44.253 "aliases": [ 00:30:44.253 "e67b30e0-a1a1-5026-a9d1-d752b2030988" 00:30:44.253 ], 00:30:44.253 "product_name": "compress", 00:30:44.253 "block_size": 512, 00:30:44.253 
"num_blocks": 200704, 00:30:44.253 "uuid": "e67b30e0-a1a1-5026-a9d1-d752b2030988", 00:30:44.253 "assigned_rate_limits": { 00:30:44.253 "rw_ios_per_sec": 0, 00:30:44.253 "rw_mbytes_per_sec": 0, 00:30:44.253 "r_mbytes_per_sec": 0, 00:30:44.253 "w_mbytes_per_sec": 0 00:30:44.254 }, 00:30:44.254 "claimed": false, 00:30:44.254 "zoned": false, 00:30:44.254 "supported_io_types": { 00:30:44.254 "read": true, 00:30:44.254 "write": true, 00:30:44.254 "unmap": false, 00:30:44.254 "flush": false, 00:30:44.254 "reset": false, 00:30:44.254 "nvme_admin": false, 00:30:44.254 "nvme_io": false, 00:30:44.254 "nvme_io_md": false, 00:30:44.254 "write_zeroes": true, 00:30:44.254 "zcopy": false, 00:30:44.254 "get_zone_info": false, 00:30:44.254 "zone_management": false, 00:30:44.254 "zone_append": false, 00:30:44.254 "compare": false, 00:30:44.254 "compare_and_write": false, 00:30:44.254 "abort": false, 00:30:44.254 "seek_hole": false, 00:30:44.254 "seek_data": false, 00:30:44.254 "copy": false, 00:30:44.254 "nvme_iov_md": false 00:30:44.254 }, 00:30:44.254 "driver_specific": { 00:30:44.254 "compress": { 00:30:44.254 "name": "COMP_lvs0/lv0", 00:30:44.254 "base_bdev_name": "b29a7fa3-d052-4b83-9fee-afb01db9556f" 00:30:44.254 } 00:30:44.254 } 00:30:44.254 } 00:30:44.254 ] 00:30:44.512 22:14:03 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:30:44.512 22:14:03 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:44.512 Running I/O for 3 seconds... 
00:30:47.797
00:30:47.797 Latency(us)
00:30:47.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:47.797 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096)
00:30:47.797 Verification LBA range: start 0x0 length 0x3100
00:30:47.797 COMP_lvs0/lv0 : 3.01 3271.13 12.78 0.00 0.00 9738.01 61.85 15204.35
00:30:47.797 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096)
00:30:47.797 Verification LBA range: start 0x3100 length 0x3100
00:30:47.797 COMP_lvs0/lv0 : 3.01 3250.18 12.70 0.00 0.00 9797.84 60.62 15518.92
00:30:47.797 ===================================================================================================================
00:30:47.797 Total : 6521.30 25.47 0.00 0.00 9767.82 60.62 15518.92
00:30:47.797 0
00:30:47.797 22:14:06 compress_isal -- compress/compress.sh@76 -- # destroy_vols
00:30:47.797 22:14:06 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0
00:30:47.797 22:14:06 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0
00:30:47.797 22:14:07 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT
00:30:47.797 22:14:07 compress_isal -- compress/compress.sh@78 -- # killprocess 1556225
00:30:47.797 22:14:07 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 1556225 ']'
00:30:47.797 22:14:07 compress_isal -- common/autotest_common.sh@952 -- # kill -0 1556225
00:30:47.797 22:14:07 compress_isal -- common/autotest_common.sh@953 -- # uname
00:30:47.797 22:14:07 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:30:47.797 22:14:07 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1556225
00:30:48.056 22:14:07 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:30:48.056 22:14:07 compress_isal
-- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:48.056 22:14:07 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1556225' 00:30:48.056 killing process with pid 1556225 00:30:48.056 22:14:07 compress_isal -- common/autotest_common.sh@967 -- # kill 1556225 00:30:48.056 Received shutdown signal, test time was about 3.000000 seconds 00:30:48.056 00:30:48.056 Latency(us) 00:30:48.056 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:48.056 =================================================================================================================== 00:30:48.056 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:48.056 22:14:07 compress_isal -- common/autotest_common.sh@972 -- # wait 1556225 00:30:51.413 22:14:10 compress_isal -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:30:51.413 22:14:10 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:30:51.413 22:14:10 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=1558458 00:30:51.413 22:14:10 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:51.413 22:14:10 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:30:51.413 22:14:10 compress_isal -- compress/compress.sh@73 -- # waitforlisten 1558458 00:30:51.413 22:14:10 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 1558458 ']' 00:30:51.413 22:14:10 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:51.413 22:14:10 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:51.413 22:14:10 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:51.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:51.413 22:14:10 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:51.413 22:14:10 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:30:51.672 [2024-07-13 22:14:10.822487] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:51.672 [2024-07-13 22:14:10.822598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1558458 ] 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:51.672 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3f:01.6 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:30:51.672 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:51.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:51.672 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:51.672 [2024-07-13 22:14:10.986129] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:51.931 [2024-07-13 22:14:11.191099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:51.931 [2024-07-13 22:14:11.191105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:52.499 22:14:11 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:52.499 22:14:11 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:30:52.499 22:14:11 compress_isal -- compress/compress.sh@74 -- # create_vols 4096 00:30:52.499 22:14:11 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:52.499 22:14:11 compress_isal -- compress/compress.sh@34 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:55.786 22:14:14 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:30:55.786 22:14:14 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:30:55.786 22:14:14 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:55.786 22:14:14 compress_isal -- common/autotest_common.sh@899 -- # local i 00:30:55.786 22:14:14 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:55.786 22:14:14 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:55.786 22:14:14 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:55.786 22:14:14 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:30:55.786 [ 00:30:55.786 { 00:30:55.786 "name": "Nvme0n1", 00:30:55.786 "aliases": [ 00:30:55.786 "ee2a7af8-1cf0-44cc-9b90-dca8a5186fbc" 00:30:55.786 ], 00:30:55.786 "product_name": "NVMe disk", 00:30:55.786 "block_size": 512, 00:30:55.786 "num_blocks": 3907029168, 00:30:55.786 "uuid": "ee2a7af8-1cf0-44cc-9b90-dca8a5186fbc", 00:30:55.786 "assigned_rate_limits": { 00:30:55.786 "rw_ios_per_sec": 0, 00:30:55.786 "rw_mbytes_per_sec": 0, 00:30:55.786 "r_mbytes_per_sec": 0, 00:30:55.786 "w_mbytes_per_sec": 0 00:30:55.786 }, 00:30:55.786 "claimed": false, 00:30:55.786 "zoned": false, 00:30:55.786 "supported_io_types": { 00:30:55.786 "read": true, 00:30:55.786 "write": true, 00:30:55.786 "unmap": true, 00:30:55.786 "flush": true, 00:30:55.786 "reset": true, 00:30:55.786 "nvme_admin": true, 00:30:55.786 "nvme_io": true, 00:30:55.786 "nvme_io_md": false, 00:30:55.786 "write_zeroes": true, 00:30:55.786 "zcopy": false, 00:30:55.786 "get_zone_info": false, 00:30:55.786 "zone_management": false, 00:30:55.786 "zone_append": false, 
00:30:55.786 "compare": false, 00:30:55.786 "compare_and_write": false, 00:30:55.786 "abort": true, 00:30:55.786 "seek_hole": false, 00:30:55.786 "seek_data": false, 00:30:55.786 "copy": false, 00:30:55.786 "nvme_iov_md": false 00:30:55.786 }, 00:30:55.786 "driver_specific": { 00:30:55.786 "nvme": [ 00:30:55.786 { 00:30:55.786 "pci_address": "0000:d8:00.0", 00:30:55.786 "trid": { 00:30:55.786 "trtype": "PCIe", 00:30:55.786 "traddr": "0000:d8:00.0" 00:30:55.786 }, 00:30:55.786 "ctrlr_data": { 00:30:55.786 "cntlid": 0, 00:30:55.786 "vendor_id": "0x8086", 00:30:55.786 "model_number": "INTEL SSDPE2KX020T8", 00:30:55.786 "serial_number": "BTLJ125505KA2P0BGN", 00:30:55.786 "firmware_revision": "VDV10170", 00:30:55.786 "oacs": { 00:30:55.786 "security": 0, 00:30:55.786 "format": 1, 00:30:55.786 "firmware": 1, 00:30:55.786 "ns_manage": 1 00:30:55.786 }, 00:30:55.786 "multi_ctrlr": false, 00:30:55.786 "ana_reporting": false 00:30:55.786 }, 00:30:55.786 "vs": { 00:30:55.786 "nvme_version": "1.2" 00:30:55.786 }, 00:30:55.786 "ns_data": { 00:30:55.786 "id": 1, 00:30:55.786 "can_share": false 00:30:55.786 } 00:30:55.786 } 00:30:55.786 ], 00:30:55.786 "mp_policy": "active_passive" 00:30:55.786 } 00:30:55.786 } 00:30:55.786 ] 00:30:55.786 22:14:15 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:30:55.786 22:14:15 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:30:57.163 fad97aab-9ae7-4462-b93d-fa51e3676c6b 00:30:57.163 22:14:16 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:30:57.163 1f56d633-b984-422a-a908-669f23b45481 00:30:57.163 22:14:16 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:30:57.163 22:14:16 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:30:57.163 22:14:16 compress_isal -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:57.163 22:14:16 compress_isal -- common/autotest_common.sh@899 -- # local i 00:30:57.163 22:14:16 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:57.163 22:14:16 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:57.163 22:14:16 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:57.163 22:14:16 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:30:57.422 [ 00:30:57.422 { 00:30:57.422 "name": "1f56d633-b984-422a-a908-669f23b45481", 00:30:57.422 "aliases": [ 00:30:57.422 "lvs0/lv0" 00:30:57.422 ], 00:30:57.422 "product_name": "Logical Volume", 00:30:57.422 "block_size": 512, 00:30:57.422 "num_blocks": 204800, 00:30:57.422 "uuid": "1f56d633-b984-422a-a908-669f23b45481", 00:30:57.422 "assigned_rate_limits": { 00:30:57.422 "rw_ios_per_sec": 0, 00:30:57.422 "rw_mbytes_per_sec": 0, 00:30:57.422 "r_mbytes_per_sec": 0, 00:30:57.422 "w_mbytes_per_sec": 0 00:30:57.422 }, 00:30:57.422 "claimed": false, 00:30:57.422 "zoned": false, 00:30:57.422 "supported_io_types": { 00:30:57.422 "read": true, 00:30:57.422 "write": true, 00:30:57.422 "unmap": true, 00:30:57.422 "flush": false, 00:30:57.422 "reset": true, 00:30:57.422 "nvme_admin": false, 00:30:57.422 "nvme_io": false, 00:30:57.422 "nvme_io_md": false, 00:30:57.422 "write_zeroes": true, 00:30:57.422 "zcopy": false, 00:30:57.422 "get_zone_info": false, 00:30:57.422 "zone_management": false, 00:30:57.422 "zone_append": false, 00:30:57.422 "compare": false, 00:30:57.422 "compare_and_write": false, 00:30:57.422 "abort": false, 00:30:57.422 "seek_hole": true, 00:30:57.422 "seek_data": true, 00:30:57.422 "copy": false, 00:30:57.422 "nvme_iov_md": false 00:30:57.422 }, 00:30:57.422 "driver_specific": { 00:30:57.422 "lvol": { 00:30:57.422 
"lvol_store_uuid": "fad97aab-9ae7-4462-b93d-fa51e3676c6b", 00:30:57.422 "base_bdev": "Nvme0n1", 00:30:57.422 "thin_provision": true, 00:30:57.422 "num_allocated_clusters": 0, 00:30:57.422 "snapshot": false, 00:30:57.422 "clone": false, 00:30:57.422 "esnap_clone": false 00:30:57.422 } 00:30:57.422 } 00:30:57.422 } 00:30:57.422 ] 00:30:57.422 22:14:16 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:30:57.422 22:14:16 compress_isal -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:30:57.423 22:14:16 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:30:57.682 [2024-07-13 22:14:16.834742] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:30:57.682 COMP_lvs0/lv0 00:30:57.682 22:14:16 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:30:57.682 22:14:16 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:30:57.682 22:14:16 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:57.682 22:14:16 compress_isal -- common/autotest_common.sh@899 -- # local i 00:30:57.682 22:14:16 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:57.682 22:14:16 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:57.682 22:14:16 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:57.682 22:14:17 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:30:57.941 [ 00:30:57.941 { 00:30:57.941 "name": "COMP_lvs0/lv0", 00:30:57.941 "aliases": [ 00:30:57.941 "34099ee8-cd60-5237-8c68-b22f865aa266" 00:30:57.941 ], 00:30:57.941 "product_name": "compress", 00:30:57.941 "block_size": 4096, 00:30:57.941 
"num_blocks": 25088, 00:30:57.941 "uuid": "34099ee8-cd60-5237-8c68-b22f865aa266", 00:30:57.941 "assigned_rate_limits": { 00:30:57.941 "rw_ios_per_sec": 0, 00:30:57.941 "rw_mbytes_per_sec": 0, 00:30:57.941 "r_mbytes_per_sec": 0, 00:30:57.941 "w_mbytes_per_sec": 0 00:30:57.941 }, 00:30:57.941 "claimed": false, 00:30:57.941 "zoned": false, 00:30:57.941 "supported_io_types": { 00:30:57.941 "read": true, 00:30:57.941 "write": true, 00:30:57.941 "unmap": false, 00:30:57.941 "flush": false, 00:30:57.941 "reset": false, 00:30:57.941 "nvme_admin": false, 00:30:57.941 "nvme_io": false, 00:30:57.941 "nvme_io_md": false, 00:30:57.941 "write_zeroes": true, 00:30:57.941 "zcopy": false, 00:30:57.941 "get_zone_info": false, 00:30:57.941 "zone_management": false, 00:30:57.941 "zone_append": false, 00:30:57.941 "compare": false, 00:30:57.941 "compare_and_write": false, 00:30:57.941 "abort": false, 00:30:57.941 "seek_hole": false, 00:30:57.941 "seek_data": false, 00:30:57.941 "copy": false, 00:30:57.941 "nvme_iov_md": false 00:30:57.941 }, 00:30:57.941 "driver_specific": { 00:30:57.941 "compress": { 00:30:57.941 "name": "COMP_lvs0/lv0", 00:30:57.941 "base_bdev_name": "1f56d633-b984-422a-a908-669f23b45481" 00:30:57.941 } 00:30:57.941 } 00:30:57.941 } 00:30:57.941 ] 00:30:57.941 22:14:17 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:30:57.941 22:14:17 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:57.941 Running I/O for 3 seconds... 
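In this second run the compress bdev was created with `-l 4096`, and `bdev_get_bdevs` above now reports a 4096 B block size with 25088 blocks. That is the same usable byte capacity as the 512 B / 200704-block compress bdev from the first run, just expressed in larger blocks; a quick cross-check of the two reported geometries:

```python
# Geometries reported by bdev_get_bdevs for the two runs in this log.
run1_bytes = 200704 * 512    # first run:  512 B blocks
run2_bytes = 25088 * 4096    # second run: 4 KiB blocks (-l 4096)

assert run1_bytes == run2_bytes
print(run1_bytes)  # 102760448
```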
00:31:01.231
00:31:01.231 Latency(us)
00:31:01.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:01.231 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096)
00:31:01.231 Verification LBA range: start 0x0 length 0x3100
00:31:01.231 COMP_lvs0/lv0 : 3.01 3373.09 13.18 0.00 0.00 9446.52 61.44 14575.21
00:31:01.231 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096)
00:31:01.231 Verification LBA range: start 0x3100 length 0x3100
00:31:01.231 COMP_lvs0/lv0 : 3.01 3359.00 13.12 0.00 0.00 9476.00 61.44 14575.21
00:31:01.231 ===================================================================================================================
00:31:01.231 Total : 6732.10 26.30 0.00 0.00 9461.24 61.44 14575.21
00:31:01.231 0
00:31:01.231 22:14:20 compress_isal -- compress/compress.sh@76 -- # destroy_vols
00:31:01.231 22:14:20 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0
00:31:01.231 22:14:20 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0
00:31:01.490 22:14:20 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT
00:31:01.490 22:14:20 compress_isal -- compress/compress.sh@78 -- # killprocess 1558458
00:31:01.490 22:14:20 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 1558458 ']'
00:31:01.490 22:14:20 compress_isal -- common/autotest_common.sh@952 -- # kill -0 1558458
00:31:01.490 22:14:20 compress_isal -- common/autotest_common.sh@953 -- # uname
00:31:01.490 22:14:20 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:31:01.490 22:14:20 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1558458
00:31:01.490 22:14:20 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:31:01.490 22:14:20 compress_isal
-- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:01.490 22:14:20 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1558458' 00:31:01.490 killing process with pid 1558458 00:31:01.490 22:14:20 compress_isal -- common/autotest_common.sh@967 -- # kill 1558458 00:31:01.490 Received shutdown signal, test time was about 3.000000 seconds 00:31:01.490 00:31:01.490 Latency(us) 00:31:01.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:01.490 =================================================================================================================== 00:31:01.490 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:01.490 22:14:20 compress_isal -- common/autotest_common.sh@972 -- # wait 1558458 00:31:05.697 22:14:24 compress_isal -- compress/compress.sh@89 -- # run_bdevio 00:31:05.697 22:14:24 compress_isal -- compress/compress.sh@50 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:31:05.697 22:14:24 compress_isal -- compress/compress.sh@55 -- # bdevio_pid=1560649 00:31:05.697 22:14:24 compress_isal -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:05.697 22:14:24 compress_isal -- compress/compress.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w 00:31:05.697 22:14:24 compress_isal -- compress/compress.sh@57 -- # waitforlisten 1560649 00:31:05.697 22:14:24 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 1560649 ']' 00:31:05.697 22:14:24 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:05.697 22:14:24 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:05.697 22:14:24 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:05.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:05.697 22:14:24 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:05.697 22:14:24 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:31:05.697 [2024-07-13 22:14:24.296724] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:31:05.697 [2024-07-13 22:14:24.296821] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1560649 ] 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3d:02.1 cannot be used 
00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:05.697 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:05.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:05.697 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:05.697 [2024-07-13 22:14:24.459329] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:05.697 [2024-07-13 22:14:24.665109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:05.697 [2024-07-13 22:14:24.665175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.697 [2024-07-13 22:14:24.665189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:05.697 22:14:25 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:05.697 22:14:25 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:31:05.697 22:14:25 compress_isal -- compress/compress.sh@58 -- # create_vols 00:31:05.697 22:14:25 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:05.697 22:14:25 compress_isal -- 
compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:08.985 22:14:28 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:31:08.985 22:14:28 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:31:08.985 22:14:28 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:08.985 22:14:28 compress_isal -- common/autotest_common.sh@899 -- # local i 00:31:08.985 22:14:28 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:08.985 22:14:28 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:08.985 22:14:28 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:08.985 22:14:28 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:31:09.244 [ 00:31:09.244 { 00:31:09.244 "name": "Nvme0n1", 00:31:09.244 "aliases": [ 00:31:09.244 "bb1aeef3-ac2a-48bb-887a-afffd704498c" 00:31:09.244 ], 00:31:09.244 "product_name": "NVMe disk", 00:31:09.244 "block_size": 512, 00:31:09.244 "num_blocks": 3907029168, 00:31:09.244 "uuid": "bb1aeef3-ac2a-48bb-887a-afffd704498c", 00:31:09.244 "assigned_rate_limits": { 00:31:09.244 "rw_ios_per_sec": 0, 00:31:09.244 "rw_mbytes_per_sec": 0, 00:31:09.244 "r_mbytes_per_sec": 0, 00:31:09.244 "w_mbytes_per_sec": 0 00:31:09.244 }, 00:31:09.244 "claimed": false, 00:31:09.244 "zoned": false, 00:31:09.244 "supported_io_types": { 00:31:09.244 "read": true, 00:31:09.244 "write": true, 00:31:09.244 "unmap": true, 00:31:09.244 "flush": true, 00:31:09.244 "reset": true, 00:31:09.244 "nvme_admin": true, 00:31:09.244 "nvme_io": true, 00:31:09.244 "nvme_io_md": false, 00:31:09.244 "write_zeroes": true, 00:31:09.244 "zcopy": false, 00:31:09.244 "get_zone_info": false, 00:31:09.244 "zone_management": false, 00:31:09.244 "zone_append": 
false, 00:31:09.244 "compare": false, 00:31:09.244 "compare_and_write": false, 00:31:09.244 "abort": true, 00:31:09.244 "seek_hole": false, 00:31:09.244 "seek_data": false, 00:31:09.244 "copy": false, 00:31:09.244 "nvme_iov_md": false 00:31:09.244 }, 00:31:09.244 "driver_specific": { 00:31:09.244 "nvme": [ 00:31:09.244 { 00:31:09.244 "pci_address": "0000:d8:00.0", 00:31:09.244 "trid": { 00:31:09.244 "trtype": "PCIe", 00:31:09.244 "traddr": "0000:d8:00.0" 00:31:09.244 }, 00:31:09.244 "ctrlr_data": { 00:31:09.244 "cntlid": 0, 00:31:09.244 "vendor_id": "0x8086", 00:31:09.244 "model_number": "INTEL SSDPE2KX020T8", 00:31:09.244 "serial_number": "BTLJ125505KA2P0BGN", 00:31:09.244 "firmware_revision": "VDV10170", 00:31:09.244 "oacs": { 00:31:09.244 "security": 0, 00:31:09.244 "format": 1, 00:31:09.244 "firmware": 1, 00:31:09.244 "ns_manage": 1 00:31:09.244 }, 00:31:09.244 "multi_ctrlr": false, 00:31:09.244 "ana_reporting": false 00:31:09.244 }, 00:31:09.244 "vs": { 00:31:09.244 "nvme_version": "1.2" 00:31:09.244 }, 00:31:09.244 "ns_data": { 00:31:09.244 "id": 1, 00:31:09.244 "can_share": false 00:31:09.244 } 00:31:09.244 } 00:31:09.244 ], 00:31:09.244 "mp_policy": "active_passive" 00:31:09.244 } 00:31:09.244 } 00:31:09.244 ] 00:31:09.244 22:14:28 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:31:09.244 22:14:28 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:31:10.620 74601a17-fa21-4eb2-9a8d-25a385a127d5 00:31:10.620 22:14:29 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:31:10.620 7efe0e11-d379-464c-9f47-38d545db2c0c 00:31:10.620 22:14:29 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:31:10.620 22:14:29 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:31:10.620 22:14:29 
compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:10.620 22:14:29 compress_isal -- common/autotest_common.sh@899 -- # local i 00:31:10.620 22:14:29 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:10.620 22:14:29 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:10.620 22:14:29 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:10.879 22:14:30 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:31:11.137 [ 00:31:11.137 { 00:31:11.137 "name": "7efe0e11-d379-464c-9f47-38d545db2c0c", 00:31:11.137 "aliases": [ 00:31:11.137 "lvs0/lv0" 00:31:11.137 ], 00:31:11.137 "product_name": "Logical Volume", 00:31:11.137 "block_size": 512, 00:31:11.137 "num_blocks": 204800, 00:31:11.137 "uuid": "7efe0e11-d379-464c-9f47-38d545db2c0c", 00:31:11.137 "assigned_rate_limits": { 00:31:11.137 "rw_ios_per_sec": 0, 00:31:11.138 "rw_mbytes_per_sec": 0, 00:31:11.138 "r_mbytes_per_sec": 0, 00:31:11.138 "w_mbytes_per_sec": 0 00:31:11.138 }, 00:31:11.138 "claimed": false, 00:31:11.138 "zoned": false, 00:31:11.138 "supported_io_types": { 00:31:11.138 "read": true, 00:31:11.138 "write": true, 00:31:11.138 "unmap": true, 00:31:11.138 "flush": false, 00:31:11.138 "reset": true, 00:31:11.138 "nvme_admin": false, 00:31:11.138 "nvme_io": false, 00:31:11.138 "nvme_io_md": false, 00:31:11.138 "write_zeroes": true, 00:31:11.138 "zcopy": false, 00:31:11.138 "get_zone_info": false, 00:31:11.138 "zone_management": false, 00:31:11.138 "zone_append": false, 00:31:11.138 "compare": false, 00:31:11.138 "compare_and_write": false, 00:31:11.138 "abort": false, 00:31:11.138 "seek_hole": true, 00:31:11.138 "seek_data": true, 00:31:11.138 "copy": false, 00:31:11.138 "nvme_iov_md": false 00:31:11.138 }, 00:31:11.138 "driver_specific": { 00:31:11.138 
"lvol": { 00:31:11.138 "lvol_store_uuid": "74601a17-fa21-4eb2-9a8d-25a385a127d5", 00:31:11.138 "base_bdev": "Nvme0n1", 00:31:11.138 "thin_provision": true, 00:31:11.138 "num_allocated_clusters": 0, 00:31:11.138 "snapshot": false, 00:31:11.138 "clone": false, 00:31:11.138 "esnap_clone": false 00:31:11.138 } 00:31:11.138 } 00:31:11.138 } 00:31:11.138 ] 00:31:11.138 22:14:30 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:31:11.138 22:14:30 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:31:11.138 22:14:30 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:31:11.138 [2024-07-13 22:14:30.448676] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:31:11.138 COMP_lvs0/lv0 00:31:11.138 22:14:30 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:31:11.138 22:14:30 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:31:11.138 22:14:30 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:11.138 22:14:30 compress_isal -- common/autotest_common.sh@899 -- # local i 00:31:11.138 22:14:30 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:11.138 22:14:30 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:11.138 22:14:30 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:11.434 22:14:30 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:31:11.747 [ 00:31:11.747 { 00:31:11.747 "name": "COMP_lvs0/lv0", 00:31:11.747 "aliases": [ 00:31:11.747 "d430c7b8-257f-51cc-8938-e43daafe6644" 00:31:11.747 ], 00:31:11.747 "product_name": "compress", 00:31:11.747 "block_size": 512, 
00:31:11.747 "num_blocks": 200704, 00:31:11.747 "uuid": "d430c7b8-257f-51cc-8938-e43daafe6644", 00:31:11.747 "assigned_rate_limits": { 00:31:11.747 "rw_ios_per_sec": 0, 00:31:11.747 "rw_mbytes_per_sec": 0, 00:31:11.747 "r_mbytes_per_sec": 0, 00:31:11.747 "w_mbytes_per_sec": 0 00:31:11.747 }, 00:31:11.747 "claimed": false, 00:31:11.747 "zoned": false, 00:31:11.747 "supported_io_types": { 00:31:11.747 "read": true, 00:31:11.747 "write": true, 00:31:11.747 "unmap": false, 00:31:11.747 "flush": false, 00:31:11.747 "reset": false, 00:31:11.747 "nvme_admin": false, 00:31:11.747 "nvme_io": false, 00:31:11.747 "nvme_io_md": false, 00:31:11.747 "write_zeroes": true, 00:31:11.747 "zcopy": false, 00:31:11.747 "get_zone_info": false, 00:31:11.747 "zone_management": false, 00:31:11.747 "zone_append": false, 00:31:11.747 "compare": false, 00:31:11.747 "compare_and_write": false, 00:31:11.747 "abort": false, 00:31:11.747 "seek_hole": false, 00:31:11.747 "seek_data": false, 00:31:11.747 "copy": false, 00:31:11.747 "nvme_iov_md": false 00:31:11.747 }, 00:31:11.747 "driver_specific": { 00:31:11.747 "compress": { 00:31:11.747 "name": "COMP_lvs0/lv0", 00:31:11.747 "base_bdev_name": "7efe0e11-d379-464c-9f47-38d545db2c0c" 00:31:11.747 } 00:31:11.747 } 00:31:11.747 } 00:31:11.747 ] 00:31:11.747 22:14:30 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:31:11.747 22:14:30 compress_isal -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:31:11.747 I/O targets: 00:31:11.747 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:31:11.747 00:31:11.747 00:31:11.747 CUnit - A unit testing framework for C - Version 2.1-3 00:31:11.747 http://cunit.sourceforge.net/ 00:31:11.747 00:31:11.747 00:31:11.747 Suite: bdevio tests on: COMP_lvs0/lv0 00:31:11.747 Test: blockdev write read block ...passed 00:31:11.747 Test: blockdev write zeroes read block ...passed 00:31:11.747 Test: blockdev write zeroes read no split 
...passed 00:31:11.747 Test: blockdev write zeroes read split ...passed 00:31:11.747 Test: blockdev write zeroes read split partial ...passed 00:31:11.747 Test: blockdev reset ...[2024-07-13 22:14:31.054280] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:31:11.747 passed 00:31:11.747 Test: blockdev write read 8 blocks ...passed 00:31:11.747 Test: blockdev write read size > 128k ...passed 00:31:11.747 Test: blockdev write read invalid size ...passed 00:31:11.747 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:11.748 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:11.748 Test: blockdev write read max offset ...passed 00:31:11.748 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:11.748 Test: blockdev writev readv 8 blocks ...passed 00:31:11.748 Test: blockdev writev readv 30 x 1block ...passed 00:31:11.748 Test: blockdev writev readv block ...passed 00:31:11.748 Test: blockdev writev readv size > 128k ...passed 00:31:11.748 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:11.748 Test: blockdev comparev and writev ...passed 00:31:11.748 Test: blockdev nvme passthru rw ...passed 00:31:11.748 Test: blockdev nvme passthru vendor specific ...passed 00:31:11.748 Test: blockdev nvme admin passthru ...passed 00:31:11.748 Test: blockdev copy ...passed 00:31:11.748 00:31:11.748 Run Summary: Type Total Ran Passed Failed Inactive 00:31:11.748 suites 1 1 n/a 0 0 00:31:11.748 tests 23 23 23 0 0 00:31:11.748 asserts 130 130 130 0 n/a 00:31:11.748 00:31:11.748 Elapsed time = 0.371 seconds 00:31:11.748 0 00:31:11.748 22:14:31 compress_isal -- compress/compress.sh@60 -- # destroy_vols 00:31:11.748 22:14:31 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:31:12.006 22:14:31 compress_isal -- compress/compress.sh@30 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:31:12.265 22:14:31 compress_isal -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:31:12.266 22:14:31 compress_isal -- compress/compress.sh@62 -- # killprocess 1560649 00:31:12.266 22:14:31 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 1560649 ']' 00:31:12.266 22:14:31 compress_isal -- common/autotest_common.sh@952 -- # kill -0 1560649 00:31:12.266 22:14:31 compress_isal -- common/autotest_common.sh@953 -- # uname 00:31:12.266 22:14:31 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:12.266 22:14:31 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1560649 00:31:12.266 22:14:31 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:12.266 22:14:31 compress_isal -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:12.266 22:14:31 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1560649' 00:31:12.266 killing process with pid 1560649 00:31:12.266 22:14:31 compress_isal -- common/autotest_common.sh@967 -- # kill 1560649 00:31:12.266 22:14:31 compress_isal -- common/autotest_common.sh@972 -- # wait 1560649 00:31:16.455 22:14:35 compress_isal -- compress/compress.sh@91 -- # '[' 1 -eq 1 ']' 00:31:16.455 22:14:35 compress_isal -- compress/compress.sh@92 -- # run_bdevperf 64 16384 30 00:31:16.455 22:14:35 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:31:16.455 22:14:35 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=1562468 00:31:16.455 22:14:35 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:16.455 22:14:35 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 64 -o 16384 -w verify -t 30 -C -m 0x6 00:31:16.455 22:14:35 
compress_isal -- compress/compress.sh@73 -- # waitforlisten 1562468 00:31:16.455 22:14:35 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 1562468 ']' 00:31:16.456 22:14:35 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:16.456 22:14:35 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:16.456 22:14:35 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:16.456 22:14:35 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:16.456 22:14:35 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:31:16.456 [2024-07-13 22:14:35.144943] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:31:16.456 [2024-07-13 22:14:35.145039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1562468 ] 00:31:16.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:16.456 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:16.456 [2024-07-13 22:14:35.302291] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:16.456 [2024-07-13 22:14:35.511754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:16.456 [2024-07-13 22:14:35.511763] reactor.c:
941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:16.715 22:14:35 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:16.715 22:14:35 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:31:16.715 22:14:35 compress_isal -- compress/compress.sh@74 -- # create_vols 00:31:16.715 22:14:35 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:16.715 22:14:35 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:20.007 22:14:38 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:31:20.007 22:14:38 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:31:20.007 22:14:38 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:20.007 22:14:38 compress_isal -- common/autotest_common.sh@899 -- # local i 00:31:20.007 22:14:38 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:20.007 22:14:38 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:20.007 22:14:38 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:20.007 22:14:39 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:31:20.007 [ 00:31:20.007 { 00:31:20.007 "name": "Nvme0n1", 00:31:20.007 "aliases": [ 00:31:20.007 "06efe02a-b5a5-462d-a39e-d80073a4bf19" 00:31:20.007 ], 00:31:20.007 "product_name": "NVMe disk", 00:31:20.007 "block_size": 512, 00:31:20.007 "num_blocks": 3907029168, 00:31:20.007 "uuid": "06efe02a-b5a5-462d-a39e-d80073a4bf19", 00:31:20.007 "assigned_rate_limits": { 00:31:20.007 "rw_ios_per_sec": 0, 00:31:20.007 "rw_mbytes_per_sec": 0, 00:31:20.007 "r_mbytes_per_sec": 0, 00:31:20.007 "w_mbytes_per_sec": 0 00:31:20.007 }, 
00:31:20.007 "claimed": false, 00:31:20.007 "zoned": false, 00:31:20.007 "supported_io_types": { 00:31:20.007 "read": true, 00:31:20.007 "write": true, 00:31:20.007 "unmap": true, 00:31:20.007 "flush": true, 00:31:20.007 "reset": true, 00:31:20.007 "nvme_admin": true, 00:31:20.007 "nvme_io": true, 00:31:20.007 "nvme_io_md": false, 00:31:20.007 "write_zeroes": true, 00:31:20.007 "zcopy": false, 00:31:20.007 "get_zone_info": false, 00:31:20.007 "zone_management": false, 00:31:20.007 "zone_append": false, 00:31:20.007 "compare": false, 00:31:20.007 "compare_and_write": false, 00:31:20.007 "abort": true, 00:31:20.007 "seek_hole": false, 00:31:20.007 "seek_data": false, 00:31:20.007 "copy": false, 00:31:20.007 "nvme_iov_md": false 00:31:20.007 }, 00:31:20.007 "driver_specific": { 00:31:20.007 "nvme": [ 00:31:20.007 { 00:31:20.007 "pci_address": "0000:d8:00.0", 00:31:20.007 "trid": { 00:31:20.007 "trtype": "PCIe", 00:31:20.007 "traddr": "0000:d8:00.0" 00:31:20.007 }, 00:31:20.007 "ctrlr_data": { 00:31:20.007 "cntlid": 0, 00:31:20.007 "vendor_id": "0x8086", 00:31:20.007 "model_number": "INTEL SSDPE2KX020T8", 00:31:20.007 "serial_number": "BTLJ125505KA2P0BGN", 00:31:20.007 "firmware_revision": "VDV10170", 00:31:20.007 "oacs": { 00:31:20.007 "security": 0, 00:31:20.007 "format": 1, 00:31:20.007 "firmware": 1, 00:31:20.007 "ns_manage": 1 00:31:20.007 }, 00:31:20.007 "multi_ctrlr": false, 00:31:20.007 "ana_reporting": false 00:31:20.007 }, 00:31:20.007 "vs": { 00:31:20.007 "nvme_version": "1.2" 00:31:20.007 }, 00:31:20.007 "ns_data": { 00:31:20.007 "id": 1, 00:31:20.007 "can_share": false 00:31:20.007 } 00:31:20.007 } 00:31:20.007 ], 00:31:20.007 "mp_policy": "active_passive" 00:31:20.007 } 00:31:20.007 } 00:31:20.007 ] 00:31:20.007 22:14:39 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:31:20.007 22:14:39 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none 
Nvme0n1 lvs0 00:31:21.385 a0310030-e3ae-44ba-8db1-753ace01f6e1 00:31:21.385 22:14:40 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:31:21.385 ab0c77d1-bfce-440b-a97a-97d8ded052f9 00:31:21.385 22:14:40 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:31:21.386 22:14:40 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:31:21.386 22:14:40 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:21.386 22:14:40 compress_isal -- common/autotest_common.sh@899 -- # local i 00:31:21.386 22:14:40 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:21.386 22:14:40 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:21.386 22:14:40 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:21.645 22:14:40 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:31:21.645 [ 00:31:21.645 { 00:31:21.645 "name": "ab0c77d1-bfce-440b-a97a-97d8ded052f9", 00:31:21.645 "aliases": [ 00:31:21.645 "lvs0/lv0" 00:31:21.645 ], 00:31:21.645 "product_name": "Logical Volume", 00:31:21.645 "block_size": 512, 00:31:21.645 "num_blocks": 204800, 00:31:21.645 "uuid": "ab0c77d1-bfce-440b-a97a-97d8ded052f9", 00:31:21.645 "assigned_rate_limits": { 00:31:21.645 "rw_ios_per_sec": 0, 00:31:21.645 "rw_mbytes_per_sec": 0, 00:31:21.645 "r_mbytes_per_sec": 0, 00:31:21.645 "w_mbytes_per_sec": 0 00:31:21.645 }, 00:31:21.645 "claimed": false, 00:31:21.645 "zoned": false, 00:31:21.645 "supported_io_types": { 00:31:21.645 "read": true, 00:31:21.645 "write": true, 00:31:21.645 "unmap": true, 00:31:21.645 "flush": false, 00:31:21.645 "reset": true, 00:31:21.645 "nvme_admin": false, 00:31:21.645 "nvme_io": false, 00:31:21.645 
"nvme_io_md": false, 00:31:21.645 "write_zeroes": true, 00:31:21.645 "zcopy": false, 00:31:21.645 "get_zone_info": false, 00:31:21.645 "zone_management": false, 00:31:21.645 "zone_append": false, 00:31:21.645 "compare": false, 00:31:21.645 "compare_and_write": false, 00:31:21.645 "abort": false, 00:31:21.645 "seek_hole": true, 00:31:21.645 "seek_data": true, 00:31:21.645 "copy": false, 00:31:21.645 "nvme_iov_md": false 00:31:21.645 }, 00:31:21.645 "driver_specific": { 00:31:21.645 "lvol": { 00:31:21.645 "lvol_store_uuid": "a0310030-e3ae-44ba-8db1-753ace01f6e1", 00:31:21.645 "base_bdev": "Nvme0n1", 00:31:21.645 "thin_provision": true, 00:31:21.645 "num_allocated_clusters": 0, 00:31:21.645 "snapshot": false, 00:31:21.645 "clone": false, 00:31:21.645 "esnap_clone": false 00:31:21.645 } 00:31:21.645 } 00:31:21.645 } 00:31:21.645 ] 00:31:21.645 22:14:41 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:31:21.645 22:14:41 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:31:21.645 22:14:41 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:31:21.904 [2024-07-13 22:14:41.165885] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:31:21.904 COMP_lvs0/lv0 00:31:21.904 22:14:41 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:31:21.904 22:14:41 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:31:21.904 22:14:41 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:21.904 22:14:41 compress_isal -- common/autotest_common.sh@899 -- # local i 00:31:21.904 22:14:41 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:21.904 22:14:41 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:21.904 22:14:41 compress_isal -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:22.163 22:14:41 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:31:22.163 [ 00:31:22.163 { 00:31:22.163 "name": "COMP_lvs0/lv0", 00:31:22.163 "aliases": [ 00:31:22.163 "123462b9-d31b-50dd-a6f8-ea2c88100101" 00:31:22.163 ], 00:31:22.163 "product_name": "compress", 00:31:22.163 "block_size": 512, 00:31:22.164 "num_blocks": 200704, 00:31:22.164 "uuid": "123462b9-d31b-50dd-a6f8-ea2c88100101", 00:31:22.164 "assigned_rate_limits": { 00:31:22.164 "rw_ios_per_sec": 0, 00:31:22.164 "rw_mbytes_per_sec": 0, 00:31:22.164 "r_mbytes_per_sec": 0, 00:31:22.164 "w_mbytes_per_sec": 0 00:31:22.164 }, 00:31:22.164 "claimed": false, 00:31:22.164 "zoned": false, 00:31:22.164 "supported_io_types": { 00:31:22.164 "read": true, 00:31:22.164 "write": true, 00:31:22.164 "unmap": false, 00:31:22.164 "flush": false, 00:31:22.164 "reset": false, 00:31:22.164 "nvme_admin": false, 00:31:22.164 "nvme_io": false, 00:31:22.164 "nvme_io_md": false, 00:31:22.164 "write_zeroes": true, 00:31:22.164 "zcopy": false, 00:31:22.164 "get_zone_info": false, 00:31:22.164 "zone_management": false, 00:31:22.164 "zone_append": false, 00:31:22.164 "compare": false, 00:31:22.164 "compare_and_write": false, 00:31:22.164 "abort": false, 00:31:22.164 "seek_hole": false, 00:31:22.164 "seek_data": false, 00:31:22.164 "copy": false, 00:31:22.164 "nvme_iov_md": false 00:31:22.164 }, 00:31:22.164 "driver_specific": { 00:31:22.164 "compress": { 00:31:22.164 "name": "COMP_lvs0/lv0", 00:31:22.164 "base_bdev_name": "ab0c77d1-bfce-440b-a97a-97d8ded052f9" 00:31:22.164 } 00:31:22.164 } 00:31:22.164 } 00:31:22.164 ] 00:31:22.164 22:14:41 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:31:22.164 22:14:41 compress_isal -- compress/compress.sh@75 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:31:22.423 Running I/O for 30 seconds... 00:31:54.503 00:31:54.503 Latency(us) 00:31:54.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:54.503 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 64, IO size: 16384) 00:31:54.503 Verification LBA range: start 0x0 length 0xc40 00:31:54.503 COMP_lvs0/lv0 : 30.01 1501.57 23.46 0.00 0.00 42459.80 1218.97 43411.05 00:31:54.503 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 64, IO size: 16384) 00:31:54.503 Verification LBA range: start 0xc40 length 0xc40 00:31:54.503 COMP_lvs0/lv0 : 30.01 4795.63 74.93 0.00 0.00 13243.52 704.51 24956.11 00:31:54.503 =================================================================================================================== 00:31:54.503 Total : 6297.19 98.39 0.00 0.00 20210.82 704.51 43411.05 00:31:54.503 0 00:31:54.503 22:15:11 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:31:54.503 22:15:11 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:31:54.503 22:15:11 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:31:54.503 22:15:12 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:31:54.503 22:15:12 compress_isal -- compress/compress.sh@78 -- # killprocess 1562468 00:31:54.504 22:15:12 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 1562468 ']' 00:31:54.504 22:15:12 compress_isal -- common/autotest_common.sh@952 -- # kill -0 1562468 00:31:54.504 22:15:12 compress_isal -- common/autotest_common.sh@953 -- # uname 00:31:54.504 22:15:12 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:54.504 22:15:12 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o 
comm= 1562468 00:31:54.504 22:15:12 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:54.504 22:15:12 compress_isal -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:54.504 22:15:12 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1562468' 00:31:54.504 killing process with pid 1562468 00:31:54.504 22:15:12 compress_isal -- common/autotest_common.sh@967 -- # kill 1562468 00:31:54.504 Received shutdown signal, test time was about 30.000000 seconds 00:31:54.504 00:31:54.504 Latency(us) 00:31:54.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:54.504 =================================================================================================================== 00:31:54.504 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:54.504 22:15:12 compress_isal -- common/autotest_common.sh@972 -- # wait 1562468 00:31:56.451 22:15:15 compress_isal -- compress/compress.sh@95 -- # export TEST_TRANSPORT=tcp 00:31:56.451 22:15:15 compress_isal -- compress/compress.sh@95 -- # TEST_TRANSPORT=tcp 00:31:56.451 22:15:15 compress_isal -- compress/compress.sh@96 -- # NET_TYPE=virt 00:31:56.451 22:15:15 compress_isal -- compress/compress.sh@96 -- # nvmftestinit 00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:56.451 22:15:15 compress_isal -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:56.451 22:15:15 compress_isal -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
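The two per-core rows in the bdevperf `Latency(us)` table above should reconcile with its `Total` row: IOPS and MiB/s add, and the average latency combines as an IOPS-weighted mean. A quick sanity check of those figures (numbers copied from the table; small residuals come from rounding in the printed rows):

```python
# Per-core rows from the bdevperf summary above: (IOPS, MiB/s, avg latency in us)
core1 = (1501.57, 23.46, 42459.80)   # Core Mask 0x2 job
core2 = (4795.63, 74.93, 13243.52)   # Core Mask 0x4 job

total_iops = core1[0] + core2[0]
total_mibs = core1[1] + core2[1]
# Average latency across both jobs, weighted by how many I/Os each completed.
weighted_avg = (core1[0] * core1[2] + core2[0] * core2[2]) / total_iops

# The table's Total row reads: 6297.19 IOPS, 98.39 MiB/s, 20210.82 us average.
print(f"{total_iops:.2f} IOPS, {total_mibs:.2f} MiB/s, ~{weighted_avg:.0f} us avg")
```

The large gap between the two jobs' latencies (roughly 42 ms vs 13 ms at the same queue depth) is what drags the weighted average toward the faster, higher-IOPS job.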
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@414 -- # [[ virt != virt ]]
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@416 -- # [[ no == yes ]]
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@423 -- # [[ virt == phy ]]
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]]
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@431 -- # [[ tcp == tcp ]]
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@432 -- # nvmf_veth_init
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster
00:31:56.451 Cannot find device "nvmf_tgt_br"
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@155 -- # true
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster
00:31:56.451 Cannot find device "nvmf_tgt_br2"
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@156 -- # true
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down
00:31:56.451 Cannot find device "nvmf_tgt_br"
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@158 -- # true
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down
00:31:56.451 Cannot find device "nvmf_tgt_br2"
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@159 -- # true
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:31:56.451 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@162 -- # true
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:31:56.451 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@163 -- # true
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:31:56.451 22:15:15 compress_isal -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:31:56.710 22:15:15 compress_isal -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:31:56.710 22:15:15 compress_isal -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:31:56.710 22:15:15 compress_isal -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:31:56.710 22:15:15 compress_isal -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:31:56.710 22:15:15 compress_isal -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:31:56.710 22:15:15 compress_isal -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up
00:31:56.710 22:15:15 compress_isal -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up
00:31:56.710 22:15:15 compress_isal -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up
00:31:56.710 22:15:15 compress_isal -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up
00:31:56.710 22:15:15 compress_isal -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:31:56.710 22:15:15 compress_isal -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:31:56.710 22:15:15 compress_isal -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:31:56.710 22:15:15 compress_isal -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge
00:31:56.710 22:15:15 compress_isal -- nvmf/common.sh@193 -- # ip link set nvmf_br up
00:31:56.710 22:15:15 compress_isal -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br
00:31:56.710 22:15:16 compress_isal -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br
00:31:56.710 22:15:16 compress_isal -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:31:56.710 22:15:16 compress_isal -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:31:56.970 22:15:16 compress_isal -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:31:56.970 22:15:16 compress_isal -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2
00:31:56.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:56.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms
00:31:56.970
00:31:56.970 --- 10.0.0.2 ping statistics ---
00:31:56.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:56.970 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms
00:31:56.970 22:15:16 compress_isal -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3
00:31:56.970 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:31:56.970 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms
00:31:56.970
00:31:56.970 --- 10.0.0.3 ping statistics ---
00:31:56.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:56.970 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms
00:31:56.970 22:15:16 compress_isal -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:31:56.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:56.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms
00:31:56.970
00:31:56.970 --- 10.0.0.1 ping statistics ---
00:31:56.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:56.970 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms
00:31:56.970 22:15:16 compress_isal -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:56.970 22:15:16 compress_isal -- nvmf/common.sh@433 -- # return 0
00:31:56.970 22:15:16 compress_isal -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:31:56.970 22:15:16 compress_isal -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:56.970 22:15:16 compress_isal -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:31:56.970 22:15:16 compress_isal -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:31:56.970 22:15:16 compress_isal -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:56.970 22:15:16 compress_isal -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:31:56.970 22:15:16 compress_isal -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:31:56.970 22:15:16 compress_isal -- compress/compress.sh@97 -- # nvmfappstart -m 0x7
00:31:56.970 22:15:16 compress_isal -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:31:56.970 22:15:16 compress_isal -- common/autotest_common.sh@722 -- # xtrace_disable
00:31:56.970 22:15:16 compress_isal -- common/autotest_common.sh@10 -- # set +x
00:31:56.970 22:15:16 compress_isal -- nvmf/common.sh@481 -- # nvmfpid=1570096
00:31:56.970 22:15:16 compress_isal -- nvmf/common.sh@482 -- # waitforlisten 1570096
00:31:56.970 22:15:16 compress_isal -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:31:56.970 22:15:16 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 1570096 ']'
00:31:56.970 22:15:16 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:56.970 22:15:16 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100
00:31:56.970 22:15:16 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:56.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 22:15:16 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable
00:31:56.970 22:15:16 compress_isal -- common/autotest_common.sh@10 -- # set +x
00:31:56.970 [2024-07-13 22:15:16.295596] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:31:56.970 [2024-07-13 22:15:16.295688] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:57.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.229 EAL: Requested device 0000:3d:01.0 cannot be used
00:31:57.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.229 EAL: Requested device 0000:3d:01.1 cannot be used
00:31:57.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.229 EAL: Requested device 0000:3d:01.2 cannot be used
00:31:57.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.229 EAL: Requested device 0000:3d:01.3 cannot be used
00:31:57.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.229 EAL: Requested device 0000:3d:01.4 cannot be used
00:31:57.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.229 EAL: Requested device 0000:3d:01.5 cannot be used
00:31:57.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.229 EAL: Requested device 0000:3d:01.6 cannot be used
00:31:57.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.229 EAL: Requested device 0000:3d:01.7 cannot be used
00:31:57.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.229 EAL: Requested device 0000:3d:02.0 cannot be used
00:31:57.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.229 EAL: Requested device 0000:3d:02.1 cannot be used
00:31:57.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.229 EAL: Requested device 0000:3d:02.2 cannot be used
00:31:57.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.229 EAL: Requested device 0000:3d:02.3 cannot be used
00:31:57.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.229 EAL: Requested device 0000:3d:02.4 cannot be used
00:31:57.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.229 EAL: Requested device 0000:3d:02.5 cannot be used
00:31:57.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.229 EAL: Requested device 0000:3d:02.6 cannot be used
00:31:57.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.229 EAL: Requested device 0000:3d:02.7 cannot be used
00:31:57.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.229 EAL: Requested device 0000:3f:01.0 cannot be used
00:31:57.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.229 EAL: Requested device 0000:3f:01.1 cannot be used
00:31:57.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.229 EAL: Requested device 0000:3f:01.2 cannot be used
00:31:57.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.229 EAL: Requested device 0000:3f:01.3 cannot be used
00:31:57.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.229 EAL: Requested device 0000:3f:01.4 cannot be used
00:31:57.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.229 EAL: Requested device 0000:3f:01.5 cannot be used
00:31:57.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.229 EAL: Requested device 0000:3f:01.6 cannot be used
00:31:57.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.229 EAL: Requested device 0000:3f:01.7 cannot be used
00:31:57.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.229 EAL: Requested device 0000:3f:02.0 cannot be used
00:31:57.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.230 EAL: Requested device 0000:3f:02.1 cannot be used
00:31:57.230 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.230 EAL: Requested device 0000:3f:02.2 cannot be used
00:31:57.230 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.230 EAL: Requested device 0000:3f:02.3 cannot be used
00:31:57.230 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.230 EAL: Requested device 0000:3f:02.4 cannot be used
00:31:57.230 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.230 EAL: Requested device 0000:3f:02.5 cannot be used
00:31:57.230 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.230 EAL: Requested device 0000:3f:02.6 cannot be used
00:31:57.230 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:57.230 EAL: Requested device 0000:3f:02.7 cannot be used
00:31:57.230 [2024-07-13 22:15:16.468002] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:31:57.489 [2024-07-13 22:15:16.676036] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:57.489 [2024-07-13 22:15:16.676083] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:57.489 [2024-07-13 22:15:16.676098] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:57.489 [2024-07-13 22:15:16.676108] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:57.489 [2024-07-13 22:15:16.676120] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:57.489 [2024-07-13 22:15:16.676245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:57.489 [2024-07-13 22:15:16.676308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:57.489 [2024-07-13 22:15:16.676317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:57.747 22:15:17 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:57.747 22:15:17 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:31:57.747 22:15:17 compress_isal -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:57.747 22:15:17 compress_isal -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:57.747 22:15:17 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:31:57.747 22:15:17 compress_isal -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:57.747 22:15:17 compress_isal -- compress/compress.sh@98 -- # trap 'nvmftestfini; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:57.747 22:15:17 compress_isal -- compress/compress.sh@101 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -u 8192 00:31:58.006 [2024-07-13 22:15:17.263594] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:58.006 22:15:17 compress_isal -- compress/compress.sh@102 -- # create_vols 00:31:58.006 22:15:17 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:58.006 22:15:17 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:32:01.295 22:15:20 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:32:01.295 22:15:20 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:32:01.295 22:15:20 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:01.295 22:15:20 compress_isal -- 
common/autotest_common.sh@899 -- # local i 00:32:01.295 22:15:20 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:01.295 22:15:20 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:01.295 22:15:20 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:01.295 22:15:20 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:32:01.553 [ 00:32:01.553 { 00:32:01.553 "name": "Nvme0n1", 00:32:01.553 "aliases": [ 00:32:01.553 "02fc2a63-6b93-4321-a4ce-0e675579dd14" 00:32:01.553 ], 00:32:01.553 "product_name": "NVMe disk", 00:32:01.553 "block_size": 512, 00:32:01.553 "num_blocks": 3907029168, 00:32:01.553 "uuid": "02fc2a63-6b93-4321-a4ce-0e675579dd14", 00:32:01.553 "assigned_rate_limits": { 00:32:01.553 "rw_ios_per_sec": 0, 00:32:01.553 "rw_mbytes_per_sec": 0, 00:32:01.553 "r_mbytes_per_sec": 0, 00:32:01.553 "w_mbytes_per_sec": 0 00:32:01.553 }, 00:32:01.554 "claimed": false, 00:32:01.554 "zoned": false, 00:32:01.554 "supported_io_types": { 00:32:01.554 "read": true, 00:32:01.554 "write": true, 00:32:01.554 "unmap": true, 00:32:01.554 "flush": true, 00:32:01.554 "reset": true, 00:32:01.554 "nvme_admin": true, 00:32:01.554 "nvme_io": true, 00:32:01.554 "nvme_io_md": false, 00:32:01.554 "write_zeroes": true, 00:32:01.554 "zcopy": false, 00:32:01.554 "get_zone_info": false, 00:32:01.554 "zone_management": false, 00:32:01.554 "zone_append": false, 00:32:01.554 "compare": false, 00:32:01.554 "compare_and_write": false, 00:32:01.554 "abort": true, 00:32:01.554 "seek_hole": false, 00:32:01.554 "seek_data": false, 00:32:01.554 "copy": false, 00:32:01.554 "nvme_iov_md": false 00:32:01.554 }, 00:32:01.554 "driver_specific": { 00:32:01.554 "nvme": [ 00:32:01.554 { 00:32:01.554 "pci_address": "0000:d8:00.0", 00:32:01.554 "trid": { 00:32:01.554 "trtype": "PCIe", 
00:32:01.554 "traddr": "0000:d8:00.0" 00:32:01.554 }, 00:32:01.554 "ctrlr_data": { 00:32:01.554 "cntlid": 0, 00:32:01.554 "vendor_id": "0x8086", 00:32:01.554 "model_number": "INTEL SSDPE2KX020T8", 00:32:01.554 "serial_number": "BTLJ125505KA2P0BGN", 00:32:01.554 "firmware_revision": "VDV10170", 00:32:01.554 "oacs": { 00:32:01.554 "security": 0, 00:32:01.554 "format": 1, 00:32:01.554 "firmware": 1, 00:32:01.554 "ns_manage": 1 00:32:01.554 }, 00:32:01.554 "multi_ctrlr": false, 00:32:01.554 "ana_reporting": false 00:32:01.554 }, 00:32:01.554 "vs": { 00:32:01.554 "nvme_version": "1.2" 00:32:01.554 }, 00:32:01.554 "ns_data": { 00:32:01.554 "id": 1, 00:32:01.554 "can_share": false 00:32:01.554 } 00:32:01.554 } 00:32:01.554 ], 00:32:01.554 "mp_policy": "active_passive" 00:32:01.554 } 00:32:01.554 } 00:32:01.554 ] 00:32:01.554 22:15:20 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:32:01.554 22:15:20 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:32:02.929 9e207a75-146a-4ce2-810f-5e0cc9051368 00:32:02.929 22:15:21 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:32:02.929 c57f990b-6d70-4ed1-94c7-4049fdc29547 00:32:02.929 22:15:22 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:32:02.929 22:15:22 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:32:02.929 22:15:22 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:02.929 22:15:22 compress_isal -- common/autotest_common.sh@899 -- # local i 00:32:02.929 22:15:22 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:02.929 22:15:22 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:02.929 22:15:22 compress_isal -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:02.929 22:15:22 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:32:03.187 [ 00:32:03.187 { 00:32:03.187 "name": "c57f990b-6d70-4ed1-94c7-4049fdc29547", 00:32:03.187 "aliases": [ 00:32:03.187 "lvs0/lv0" 00:32:03.187 ], 00:32:03.187 "product_name": "Logical Volume", 00:32:03.187 "block_size": 512, 00:32:03.187 "num_blocks": 204800, 00:32:03.187 "uuid": "c57f990b-6d70-4ed1-94c7-4049fdc29547", 00:32:03.187 "assigned_rate_limits": { 00:32:03.187 "rw_ios_per_sec": 0, 00:32:03.187 "rw_mbytes_per_sec": 0, 00:32:03.187 "r_mbytes_per_sec": 0, 00:32:03.187 "w_mbytes_per_sec": 0 00:32:03.187 }, 00:32:03.187 "claimed": false, 00:32:03.187 "zoned": false, 00:32:03.187 "supported_io_types": { 00:32:03.187 "read": true, 00:32:03.187 "write": true, 00:32:03.187 "unmap": true, 00:32:03.187 "flush": false, 00:32:03.187 "reset": true, 00:32:03.187 "nvme_admin": false, 00:32:03.187 "nvme_io": false, 00:32:03.187 "nvme_io_md": false, 00:32:03.187 "write_zeroes": true, 00:32:03.187 "zcopy": false, 00:32:03.187 "get_zone_info": false, 00:32:03.187 "zone_management": false, 00:32:03.187 "zone_append": false, 00:32:03.187 "compare": false, 00:32:03.187 "compare_and_write": false, 00:32:03.187 "abort": false, 00:32:03.187 "seek_hole": true, 00:32:03.187 "seek_data": true, 00:32:03.187 "copy": false, 00:32:03.187 "nvme_iov_md": false 00:32:03.187 }, 00:32:03.187 "driver_specific": { 00:32:03.187 "lvol": { 00:32:03.187 "lvol_store_uuid": "9e207a75-146a-4ce2-810f-5e0cc9051368", 00:32:03.187 "base_bdev": "Nvme0n1", 00:32:03.187 "thin_provision": true, 00:32:03.187 "num_allocated_clusters": 0, 00:32:03.187 "snapshot": false, 00:32:03.188 "clone": false, 00:32:03.188 "esnap_clone": false 00:32:03.188 } 00:32:03.188 } 00:32:03.188 } 00:32:03.188 ] 00:32:03.188 22:15:22 compress_isal -- 
common/autotest_common.sh@905 -- # return 0 00:32:03.188 22:15:22 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:32:03.188 22:15:22 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:32:03.446 [2024-07-13 22:15:22.647393] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:32:03.446 COMP_lvs0/lv0 00:32:03.446 22:15:22 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:32:03.446 22:15:22 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:32:03.446 22:15:22 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:03.446 22:15:22 compress_isal -- common/autotest_common.sh@899 -- # local i 00:32:03.446 22:15:22 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:03.446 22:15:22 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:03.446 22:15:22 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:03.704 22:15:22 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:32:03.704 [ 00:32:03.704 { 00:32:03.704 "name": "COMP_lvs0/lv0", 00:32:03.704 "aliases": [ 00:32:03.704 "e66b01b8-f31a-5010-a538-a9e4bd2ae760" 00:32:03.704 ], 00:32:03.704 "product_name": "compress", 00:32:03.704 "block_size": 512, 00:32:03.704 "num_blocks": 200704, 00:32:03.704 "uuid": "e66b01b8-f31a-5010-a538-a9e4bd2ae760", 00:32:03.704 "assigned_rate_limits": { 00:32:03.704 "rw_ios_per_sec": 0, 00:32:03.704 "rw_mbytes_per_sec": 0, 00:32:03.704 "r_mbytes_per_sec": 0, 00:32:03.705 "w_mbytes_per_sec": 0 00:32:03.705 }, 00:32:03.705 "claimed": false, 00:32:03.705 "zoned": false, 00:32:03.705 "supported_io_types": { 
00:32:03.705 "read": true, 00:32:03.705 "write": true, 00:32:03.705 "unmap": false, 00:32:03.705 "flush": false, 00:32:03.705 "reset": false, 00:32:03.705 "nvme_admin": false, 00:32:03.705 "nvme_io": false, 00:32:03.705 "nvme_io_md": false, 00:32:03.705 "write_zeroes": true, 00:32:03.705 "zcopy": false, 00:32:03.705 "get_zone_info": false, 00:32:03.705 "zone_management": false, 00:32:03.705 "zone_append": false, 00:32:03.705 "compare": false, 00:32:03.705 "compare_and_write": false, 00:32:03.705 "abort": false, 00:32:03.705 "seek_hole": false, 00:32:03.705 "seek_data": false, 00:32:03.705 "copy": false, 00:32:03.705 "nvme_iov_md": false 00:32:03.705 }, 00:32:03.705 "driver_specific": { 00:32:03.705 "compress": { 00:32:03.705 "name": "COMP_lvs0/lv0", 00:32:03.705 "base_bdev_name": "c57f990b-6d70-4ed1-94c7-4049fdc29547" 00:32:03.705 } 00:32:03.705 } 00:32:03.705 } 00:32:03.705 ] 00:32:03.705 22:15:23 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:32:03.705 22:15:23 compress_isal -- compress/compress.sh@103 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:03.963 22:15:23 compress_isal -- compress/compress.sh@104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 COMP_lvs0/lv0 00:32:04.222 22:15:23 compress_isal -- compress/compress.sh@105 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:04.222 [2024-07-13 22:15:23.504441] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:04.222 22:15:23 compress_isal -- compress/compress.sh@108 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 64 -s 512 -w randrw -t 30 -c 0x18 -M 50 00:32:04.222 22:15:23 compress_isal -- 
compress/compress.sh@109 -- # perf_pid=1571321 00:32:04.222 22:15:23 compress_isal -- compress/compress.sh@112 -- # trap 'killprocess $perf_pid; compress_err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:04.222 22:15:23 compress_isal -- compress/compress.sh@113 -- # wait 1571321 00:32:04.481 [2024-07-13 22:15:23.782143] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:32:36.566 Initializing NVMe Controllers 00:32:36.566 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:36.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:32:36.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:32:36.566 Initialization complete. Launching workers. 00:32:36.567 ======================================================== 00:32:36.567 Latency(us) 00:32:36.567 Device Information : IOPS MiB/s Average min max 00:32:36.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 5776.47 22.56 11080.85 1608.32 33071.10 00:32:36.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 3638.07 14.21 17593.83 3712.48 38236.69 00:32:36.567 ======================================================== 00:32:36.567 Total : 9414.53 36.78 13597.67 1608.32 38236.69 00:32:36.567 00:32:36.567 22:15:53 compress_isal -- compress/compress.sh@114 -- # destroy_vols 00:32:36.567 22:15:53 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:32:36.567 22:15:54 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:32:36.567 22:15:54 compress_isal -- 
compress/compress.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:32:36.567 22:15:54 compress_isal -- compress/compress.sh@117 -- # nvmftestfini 00:32:36.567 22:15:54 compress_isal -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:36.567 22:15:54 compress_isal -- nvmf/common.sh@117 -- # sync 00:32:36.567 22:15:54 compress_isal -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:36.567 22:15:54 compress_isal -- nvmf/common.sh@120 -- # set +e 00:32:36.567 22:15:54 compress_isal -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:36.567 22:15:54 compress_isal -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:36.567 rmmod nvme_tcp 00:32:36.567 rmmod nvme_fabrics 00:32:36.567 rmmod nvme_keyring 00:32:36.567 22:15:54 compress_isal -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:36.567 22:15:54 compress_isal -- nvmf/common.sh@124 -- # set -e 00:32:36.567 22:15:54 compress_isal -- nvmf/common.sh@125 -- # return 0 00:32:36.567 22:15:54 compress_isal -- nvmf/common.sh@489 -- # '[' -n 1570096 ']' 00:32:36.567 22:15:54 compress_isal -- nvmf/common.sh@490 -- # killprocess 1570096 00:32:36.567 22:15:54 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 1570096 ']' 00:32:36.567 22:15:54 compress_isal -- common/autotest_common.sh@952 -- # kill -0 1570096 00:32:36.567 22:15:54 compress_isal -- common/autotest_common.sh@953 -- # uname 00:32:36.567 22:15:54 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:36.567 22:15:54 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1570096 00:32:36.567 22:15:54 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:36.567 22:15:54 compress_isal -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:36.567 22:15:54 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1570096' 00:32:36.567 killing process with pid 1570096 00:32:36.567 22:15:54 compress_isal -- common/autotest_common.sh@967 -- # kill 1570096 
00:32:36.567 22:15:54 compress_isal -- common/autotest_common.sh@972 -- # wait 1570096
00:32:38.470 22:15:57 compress_isal -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:32:38.470 22:15:57 compress_isal -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:32:38.470 22:15:57 compress_isal -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:32:38.470 22:15:57 compress_isal -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:32:38.470 22:15:57 compress_isal -- nvmf/common.sh@278 -- # remove_spdk_ns
00:32:38.470 22:15:57 compress_isal -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:38.470 22:15:57 compress_isal -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:32:38.470 22:15:57 compress_isal -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:38.470 22:15:57 compress_isal -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if
00:32:38.470 22:15:57 compress_isal -- compress/compress.sh@120 -- # rm -rf /tmp/pmem
00:32:38.470
00:32:38.470 real 2m14.346s
00:32:38.470 user 6m3.982s
00:32:38.470 sys 0m17.695s
00:32:38.470 22:15:57 compress_isal -- common/autotest_common.sh@1124 -- # xtrace_disable
00:32:38.470 22:15:57 compress_isal -- common/autotest_common.sh@10 -- # set +x
00:32:38.470 ************************************
00:32:38.470 END TEST compress_isal
00:32:38.470 ************************************
00:32:38.470 22:15:57 -- common/autotest_common.sh@1142 -- # return 0
00:32:38.470 22:15:57 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']'
00:32:38.470 22:15:57 -- spdk/autotest.sh@356 -- # '[' 1 -eq 1 ']'
00:32:38.470 22:15:57 -- spdk/autotest.sh@357 -- # run_test blockdev_crypto_aesni /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni
00:32:38.470 22:15:57 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:32:38.470 22:15:57 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:32:38.470 22:15:57 -- common/autotest_common.sh@10 -- # set +x
00:32:38.792 ************************************
00:32:38.792 START TEST blockdev_crypto_aesni
00:32:38.792 ************************************
00:32:38.792 22:15:57 blockdev_crypto_aesni -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni
00:32:38.792 * Looking for test storage...
00:32:38.792 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/nbd_common.sh@6 -- # set -e
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/blockdev.sh@20 -- # :
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/blockdev.sh@674 -- # uname -s
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']'
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/blockdev.sh@682 -- # test_type=crypto_aesni
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/blockdev.sh@683 -- # crypto_device=
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/blockdev.sh@684 -- # dek=
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/blockdev.sh@685 -- # env_ctx=
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/blockdev.sh@686 -- # wait_for_rpc=
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/blockdev.sh@687 -- # '[' -n '' ']'
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/blockdev.sh@690 -- # [[ crypto_aesni == bdev ]]
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/blockdev.sh@690 -- # [[ crypto_aesni == crypto_* ]]
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/blockdev.sh@693 -- # start_spdk_tgt
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=1576898
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/blockdev.sh@49 -- # waitforlisten 1576898
00:32:38.792 22:15:57 blockdev_crypto_aesni -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc
00:32:38.792 22:15:57 blockdev_crypto_aesni -- common/autotest_common.sh@829 -- # '[' -z 1576898 ']'
00:32:38.792 22:15:57 blockdev_crypto_aesni -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:38.792 22:15:57 blockdev_crypto_aesni -- common/autotest_common.sh@834 -- # local max_retries=100
00:32:38.792 22:15:57 blockdev_crypto_aesni -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:38.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:38.792 22:15:57 blockdev_crypto_aesni -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:38.792 22:15:57 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:32:38.792 [2024-07-13 22:15:58.088926] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:32:38.792 [2024-07-13 22:15:58.089028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1576898 ] 00:32:39.052 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.052 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:39.052 [previous two messages repeated for devices 0000:3d:01.1 through 0000:3f:02.7] 00:32:39.052 [2024-07-13 22:15:58.252730] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.311 [2024-07-13 22:15:58.456493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.570 22:15:58 blockdev_crypto_aesni -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:39.570 22:15:58 blockdev_crypto_aesni -- common/autotest_common.sh@862 -- # return 0 00:32:39.570 22:15:58 blockdev_crypto_aesni -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:32:39.570 22:15:58 blockdev_crypto_aesni -- bdev/blockdev.sh@705 -- # setup_crypto_aesni_conf 00:32:39.570 22:15:58 blockdev_crypto_aesni -- bdev/blockdev.sh@146 -- # rpc_cmd 00:32:39.570 22:15:58 blockdev_crypto_aesni --
common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.570 22:15:58 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:32:39.570 [2024-07-13 22:15:58.850125] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:32:39.570 [2024-07-13 22:15:58.858170] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:32:39.570 [2024-07-13 22:15:58.866189] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:32:39.829 [2024-07-13 22:15:59.106812] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:32:43.119 true 00:32:43.119 true 00:32:43.119 true 00:32:43.119 true 00:32:43.119 Malloc0 00:32:43.119 Malloc1 00:32:43.119 Malloc2 00:32:43.119 Malloc3 00:32:43.119 [2024-07-13 22:16:02.383751] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:32:43.119 crypto_ram 00:32:43.119 [2024-07-13 22:16:02.391756] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:32:43.119 crypto_ram2 00:32:43.119 [2024-07-13 22:16:02.399759] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:32:43.119 crypto_ram3 00:32:43.119 [2024-07-13 22:16:02.407809] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:32:43.119 crypto_ram4 00:32:43.119 22:16:02 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.119 22:16:02 blockdev_crypto_aesni -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:32:43.119 22:16:02 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.119 22:16:02 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:32:43.119 22:16:02 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.119 
22:16:02 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # cat 00:32:43.119 22:16:02 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:32:43.119 22:16:02 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.119 22:16:02 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:32:43.119 22:16:02 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.119 22:16:02 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:32:43.119 22:16:02 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.119 22:16:02 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:32:43.119 22:16:02 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.119 22:16:02 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:32:43.119 22:16:02 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.119 22:16:02 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:32:43.119 22:16:02 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.119 22:16:02 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:32:43.119 22:16:02 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:32:43.119 22:16:02 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:32:43.119 22:16:02 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.119 22:16:02 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:32:43.379 22:16:02 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.379 22:16:02 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:32:43.379 22:16:02 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # jq -r .name 
00:32:43.379 22:16:02 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "3169081f-7bcc-5632-bc89-1782a05f1870"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3169081f-7bcc-5632-bc89-1782a05f1870",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "5668ab86-e302-50eb-af6b-0500765ab723"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "5668ab86-e302-50eb-af6b-0500765ab723",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' 
"get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "5ded7d7f-22c7-512f-a598-34063b9d3ad6"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "5ded7d7f-22c7-512f-a598-34063b9d3ad6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' 
"37e285e5-191c-546c-9283-7472c2cf004f"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "37e285e5-191c-546c-9283-7472c2cf004f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:32:43.379 22:16:02 blockdev_crypto_aesni -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:32:43.379 22:16:02 blockdev_crypto_aesni -- bdev/blockdev.sh@752 -- # hello_world_bdev=crypto_ram 00:32:43.379 22:16:02 blockdev_crypto_aesni -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:32:43.379 22:16:02 blockdev_crypto_aesni -- bdev/blockdev.sh@754 -- # killprocess 1576898 00:32:43.379 22:16:02 blockdev_crypto_aesni -- common/autotest_common.sh@948 -- # '[' -z 1576898 ']' 00:32:43.379 22:16:02 blockdev_crypto_aesni -- common/autotest_common.sh@952 -- # kill -0 1576898 00:32:43.379 22:16:02 blockdev_crypto_aesni -- common/autotest_common.sh@953 -- # uname 00:32:43.379 22:16:02 blockdev_crypto_aesni -- common/autotest_common.sh@953 -- # '[' 
Linux = Linux ']' 00:32:43.379 22:16:02 blockdev_crypto_aesni -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1576898 00:32:43.379 22:16:02 blockdev_crypto_aesni -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:43.379 22:16:02 blockdev_crypto_aesni -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:43.379 22:16:02 blockdev_crypto_aesni -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1576898' 00:32:43.379 killing process with pid 1576898 00:32:43.379 22:16:02 blockdev_crypto_aesni -- common/autotest_common.sh@967 -- # kill 1576898 00:32:43.379 22:16:02 blockdev_crypto_aesni -- common/autotest_common.sh@972 -- # wait 1576898 00:32:46.670 22:16:05 blockdev_crypto_aesni -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:46.670 22:16:05 blockdev_crypto_aesni -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:32:46.670 22:16:05 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:32:46.670 22:16:05 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:46.670 22:16:05 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:32:46.670 ************************************ 00:32:46.670 START TEST bdev_hello_world 00:32:46.670 ************************************ 00:32:46.670 22:16:05 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:32:46.670 [2024-07-13 22:16:05.681885] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:32:46.670 [2024-07-13 22:16:05.681975] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1578225 ] 00:32:46.670 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:46.670 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:46.670 [previous two messages repeated for devices 0000:3d:01.1 through 0000:3f:02.7] 00:32:46.670 [2024-07-13 22:16:05.837873] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.670 [2024-07-13 22:16:06.041829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.929 [2024-07-13 22:16:06.063128] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:32:46.929 [2024-07-13 22:16:06.071141] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:32:46.929 [2024-07-13 22:16:06.079160] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:32:47.188 [2024-07-13 22:16:06.358856] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:32:49.736 [2024-07-13 22:16:08.970325] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:32:49.736 [2024-07-13 22:16:08.970390] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:32:49.736 [2024-07-13 22:16:08.970407] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred
pending base bdev arrival 00:32:49.736 [2024-07-13 22:16:08.978339] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:32:49.736 [2024-07-13 22:16:08.978371] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:49.736 [2024-07-13 22:16:08.978383] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:49.736 [2024-07-13 22:16:08.986368] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:32:49.736 [2024-07-13 22:16:08.986400] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:32:49.736 [2024-07-13 22:16:08.986411] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:49.736 [2024-07-13 22:16:08.994376] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:32:49.736 [2024-07-13 22:16:08.994402] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:32:49.736 [2024-07-13 22:16:08.994413] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:49.994 [2024-07-13 22:16:09.189275] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:32:49.994 [2024-07-13 22:16:09.189315] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:32:49.994 [2024-07-13 22:16:09.189336] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:32:49.994 [2024-07-13 22:16:09.190945] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:32:49.994 [2024-07-13 22:16:09.191028] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:32:49.994 [2024-07-13 22:16:09.191045] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:32:49.994 [2024-07-13 22:16:09.191095] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello 
World! 00:32:49.994 00:32:49.994 [2024-07-13 22:16:09.191118] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:32:51.896 00:32:51.896 real 0m5.388s 00:32:51.896 user 0m4.896s 00:32:51.896 sys 0m0.450s 00:32:51.896 22:16:10 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:51.896 22:16:10 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:32:51.896 ************************************ 00:32:51.896 END TEST bdev_hello_world 00:32:51.896 ************************************ 00:32:51.896 22:16:11 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:32:51.896 22:16:11 blockdev_crypto_aesni -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:32:51.896 22:16:11 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:51.896 22:16:11 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:51.896 22:16:11 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:32:51.896 ************************************ 00:32:51.896 START TEST bdev_bounds 00:32:51.896 ************************************ 00:32:51.896 22:16:11 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:32:51.896 22:16:11 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=1579118 00:32:51.896 22:16:11 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:32:51.896 22:16:11 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 1579118' 00:32:51.896 Process bdevio pid: 1579118 00:32:51.896 22:16:11 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 1579118 00:32:51.896 22:16:11 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 1579118 ']' 00:32:51.896 22:16:11 blockdev_crypto_aesni.bdev_bounds -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:51.896 22:16:11 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:51.896 22:16:11 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:51.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:51.896 22:16:11 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:51.896 22:16:11 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:32:51.896 22:16:11 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:32:51.896 [2024-07-13 22:16:11.155532] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:32:51.896 [2024-07-13 22:16:11.155619] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1579118 ]
00:32:51.896 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.896 EAL: Requested device 0000:3d:01.0 cannot be used
00:32:51.896 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.896 EAL: Requested device 0000:3d:01.1 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3d:01.2 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3d:01.3 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3d:01.4 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3d:01.5 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3d:01.6 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3d:01.7 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3d:02.0 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3d:02.1 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3d:02.2 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3d:02.3 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3d:02.4 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3d:02.5 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3d:02.6 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3d:02.7 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3f:01.0 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3f:01.1 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3f:01.2 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3f:01.3 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3f:01.4 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3f:01.5 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3f:01.6 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3f:01.7 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3f:02.0 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3f:02.1 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3f:02.2 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3f:02.3 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3f:02.4 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3f:02.5 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3f:02.6 cannot be used
00:32:51.897 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:51.897 EAL: Requested device 0000:3f:02.7 cannot be used
00:32:52.155 [2024-07-13 22:16:11.316717] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:32:52.155 [2024-07-13 22:16:11.516733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:32:52.155 [2024-07-13 22:16:11.516800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:32:52.155 [2024-07-13 22:16:11.516804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:32:52.155 [2024-07-13 22:16:11.538140] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:32:52.414 [2024-07-13 22:16:11.546147] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:32:52.414 [2024-07-13 22:16:11.554173] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:32:52.673 [2024-07-13 22:16:11.839166] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97
00:32:55.207 [2024-07-13 22:16:14.472034] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1"
00:32:55.207 [2024-07-13 22:16:14.472109] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:32:55.207 [2024-07-13 22:16:14.472124] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:55.207 [2024-07-13 22:16:14.480047] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2"
00:32:55.207 [2024-07-13 22:16:14.480081] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:32:55.207 [2024-07-13 22:16:14.480093] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:55.207 [2024-07-13 22:16:14.488088] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3"
00:32:55.207 [2024-07-13 22:16:14.488137] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:32:55.207 [2024-07-13 22:16:14.488149] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:55.207 [2024-07-13 22:16:14.496087] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4"
00:32:55.207 [2024-07-13 22:16:14.496113] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:32:55.207 [2024-07-13 22:16:14.496123] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:56.143 22:16:15 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:32:56.143 22:16:15 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@862 -- # return 0
00:32:56.143 22:16:15 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests
00:32:56.143 I/O targets:
00:32:56.143 crypto_ram: 65536 blocks of 512 bytes (32 MiB)
00:32:56.143 crypto_ram2: 65536 blocks of 512 bytes (32 MiB)
00:32:56.143 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB)
00:32:56.143 crypto_ram4: 8192 blocks of 4096 bytes (32 MiB)
00:32:56.143
00:32:56.143
00:32:56.143 CUnit - A unit testing framework for C - Version 2.1-3
00:32:56.143 http://cunit.sourceforge.net/
00:32:56.143
00:32:56.143
00:32:56.143 Suite: bdevio tests on: crypto_ram4
00:32:56.143 Test: blockdev write read block ...passed
00:32:56.143 Test: blockdev write zeroes read block ...passed
00:32:56.143 Test: blockdev write zeroes read no split ...passed
00:32:56.143 Test: blockdev write zeroes read split ...passed
00:32:56.143 Test: blockdev write zeroes read split partial ...passed
00:32:56.143 Test: blockdev reset ...passed
00:32:56.143 Test: blockdev write read 8 blocks ...passed
00:32:56.143 Test: blockdev write read size > 128k ...passed
00:32:56.143 Test: blockdev write read invalid size ...passed
00:32:56.143 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:32:56.143 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:32:56.143 Test: blockdev write read max offset ...passed
00:32:56.143 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:32:56.143 Test: blockdev writev readv 8 blocks ...passed
00:32:56.143 Test: blockdev writev readv 30 x 1block ...passed
00:32:56.143 Test: blockdev writev readv block ...passed
00:32:56.143 Test: blockdev writev readv size > 128k ...passed
00:32:56.143 Test: blockdev writev readv size > 128k in two iovs ...passed
00:32:56.143 Test: blockdev comparev and writev ...passed
00:32:56.143 Test: blockdev nvme passthru rw ...passed
00:32:56.143 Test: blockdev nvme passthru vendor specific ...passed
00:32:56.143 Test: blockdev nvme admin passthru ...passed
00:32:56.143 Test: blockdev copy ...passed
00:32:56.143 Suite: bdevio tests on: crypto_ram3
00:32:56.143 Test: blockdev write read block ...passed
00:32:56.143 Test: blockdev write zeroes read block ...passed
00:32:56.143 Test: blockdev write zeroes read no split ...passed
00:32:56.143 Test: blockdev write zeroes read split ...passed
00:32:56.143 Test: blockdev write zeroes read split partial ...passed
00:32:56.143 Test: blockdev reset ...passed
00:32:56.143 Test: blockdev write read 8 blocks ...passed
00:32:56.143 Test: blockdev write read size > 128k ...passed
00:32:56.143 Test: blockdev write read invalid size ...passed
00:32:56.143 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:32:56.143 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:32:56.143 Test: blockdev write read max offset ...passed
00:32:56.143 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:32:56.143 Test: blockdev writev readv 8 blocks ...passed
00:32:56.143 Test: blockdev writev readv 30 x 1block ...passed
00:32:56.143 Test: blockdev writev readv block ...passed
00:32:56.143 Test: blockdev writev readv size > 128k ...passed
00:32:56.143 Test: blockdev writev readv size > 128k in two iovs ...passed
00:32:56.143 Test: blockdev comparev and writev ...passed
00:32:56.143 Test: blockdev nvme passthru rw ...passed
00:32:56.143 Test: blockdev nvme passthru vendor specific ...passed
00:32:56.143 Test: blockdev nvme admin passthru ...passed
00:32:56.143 Test: blockdev copy ...passed
00:32:56.143 Suite: bdevio tests on: crypto_ram2
00:32:56.143 Test: blockdev write read block ...passed
00:32:56.143 Test: blockdev write zeroes read block ...passed
00:32:56.401 Test: blockdev write zeroes read no split ...passed
00:32:56.401 Test: blockdev write zeroes read split ...passed
00:32:56.401 Test: blockdev write zeroes read split partial ...passed
00:32:56.401 Test: blockdev reset ...passed
00:32:56.401 Test: blockdev write read 8 blocks ...passed
00:32:56.401 Test: blockdev write read size > 128k ...passed
00:32:56.401 Test: blockdev write read invalid size ...passed
00:32:56.401 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:32:56.401 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:32:56.401 Test: blockdev write read max offset ...passed
00:32:56.401 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:32:56.401 Test: blockdev writev readv 8 blocks ...passed
00:32:56.401 Test: blockdev writev readv 30 x 1block ...passed
00:32:56.401 Test: blockdev writev readv block ...passed
00:32:56.401 Test: blockdev writev readv size > 128k ...passed
00:32:56.401 Test: blockdev writev readv size > 128k in two iovs ...passed
00:32:56.401 Test: blockdev comparev and writev ...passed
00:32:56.401 Test: blockdev nvme passthru rw ...passed
00:32:56.401 Test: blockdev nvme passthru vendor specific ...passed
00:32:56.401 Test: blockdev nvme admin passthru ...passed
00:32:56.401 Test: blockdev copy ...passed
00:32:56.401 Suite: bdevio tests on: crypto_ram
00:32:56.401 Test: blockdev write read block ...passed
00:32:56.401 Test: blockdev write zeroes read block ...passed
00:32:56.401 Test: blockdev write zeroes read no split ...passed
00:32:56.660 Test: blockdev write zeroes read split ...passed
00:32:56.660 Test: blockdev write zeroes read split partial ...passed
00:32:56.660 Test: blockdev reset ...passed
00:32:56.660 Test: blockdev write read 8 blocks ...passed
00:32:56.660 Test: blockdev write read size > 128k ...passed
00:32:56.660 Test: blockdev write read invalid size ...passed
00:32:56.660 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:32:56.660 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:32:56.660 Test: blockdev write read max offset ...passed
00:32:56.660 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:32:56.660 Test: blockdev writev readv 8 blocks ...passed
00:32:56.660 Test: blockdev writev readv 30 x 1block ...passed
00:32:56.660 Test: blockdev writev readv block ...passed
00:32:56.660 Test: blockdev writev readv size > 128k ...passed
00:32:56.660 Test: blockdev writev readv size > 128k in two iovs ...passed
00:32:56.660 Test: blockdev comparev and writev ...passed
00:32:56.660 Test: blockdev nvme passthru rw ...passed
00:32:56.660 Test: blockdev nvme passthru vendor specific ...passed
00:32:56.660 Test: blockdev nvme admin passthru ...passed
00:32:56.660 Test: blockdev copy ...passed
00:32:56.660
00:32:56.660 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:32:56.660               suites      4      4    n/a      0        0
00:32:56.660                tests     92     92     92      0        0
00:32:56.660              asserts    520    520    520      0      n/a
00:32:56.660
00:32:56.660 Elapsed time =    1.357 seconds
00:32:56.660 0
00:32:56.660 22:16:15 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 1579118
00:32:56.660 22:16:15 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 1579118 ']'
00:32:56.660 22:16:15 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 1579118
00:32:56.660 22:16:15 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@953 -- # uname
00:32:56.660 22:16:15 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:32:56.660 22:16:15 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1579118
00:32:56.660 22:16:15 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:32:56.660 22:16:15 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:32:56.660 22:16:15 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1579118'
00:32:56.660 killing process with pid 1579118
00:32:56.660 22:16:15 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@967 -- # kill 1579118
00:32:56.660 22:16:15 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@972 -- # wait 1579118
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT
00:32:58.627
00:32:58.627 real 0m6.780s
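The kill sequence traced above (guard against an empty pid, `kill -0` liveness check, `ps` name lookup, then `kill` and `wait`) is a common cleanup pattern. A hedged sketch of that pattern, reconstructed from the trace rather than copied from autotest_common.sh:

```shell
#!/usr/bin/env bash
# Sketch of a killprocess-style helper following the xtrace above:
# refuse an empty pid, verify the process still exists, resolve its
# name, then terminate and reap it. Illustrative reconstruction only.
killprocess() {
    local pid="$1"
    [ -n "$pid" ] || return 1                  # the '[' -z ... ']' guard
    kill -0 "$pid" 2>/dev/null || return 0     # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= -p "$pid" 2>/dev/null || true)
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true            # reap it if it is our child
}

# Demo against a throwaway background process:
sleep 30 &
killprocess $!
```

Calling `wait` after `kill` is what keeps the log free of zombie-process noise between test phases: the helper does not return until the child is actually reaped.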
00:32:58.627 user 0m18.546s
00:32:58.627 sys 0m0.661s
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:32:58.627 ************************************
00:32:58.627 END TEST bdev_bounds
00:32:58.627 ************************************
00:32:58.627 22:16:17 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0
00:32:58.627 22:16:17 blockdev_crypto_aesni -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' ''
00:32:58.627 22:16:17 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:32:58.627 22:16:17 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable
00:32:58.627 22:16:17 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:32:58.627 ************************************
00:32:58.627 START TEST bdev_nbd
00:32:58.627 ************************************
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' ''
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]]
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4')
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=4
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]]
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=4
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11')
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4')
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=1580372
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json ''
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 1580372 /var/tmp/spdk-nbd.sock
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 1580372 ']'
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:32:58.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable
00:32:58.627 22:16:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:32:58.885 [2024-07-13 22:16:18.032203] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:32:58.885 [2024-07-13 22:16:18.032296] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3d:01.0 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3d:01.1 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3d:01.2 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3d:01.3 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3d:01.4 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3d:01.5 cannot be used
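The waitforlisten trace above (`rpc_addr=/var/tmp/spdk-nbd.sock`, `max_retries=100`, the "Waiting for process to start up..." echo) follows a bounded-poll pattern. A sketch of that pattern under stated assumptions; the loop body and retry interval are illustrative, not the actual autotest_common.sh source, and the real helper also verifies the socket answers RPC:

```shell
#!/usr/bin/env bash
# Illustrative waitforlisten-style helper: poll until a UNIX-domain
# RPC socket appears, bail out early if the target process dies, and
# cap the wait at max_retries polls.
waitforlisten() {
    local pid="$1" rpc_addr="${2:-/var/tmp/spdk.sock}"
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1  # process died while we waited
        [ -S "$rpc_addr" ] && return 0          # socket node is present
        sleep 0.1
    done
    return 1                                    # retries exhausted
}
```

In the autotest flow this is called immediately after launching `bdev_svc`, e.g. `waitforlisten "$nbd_pid" /var/tmp/spdk-nbd.sock`, so the RPC calls that follow never race the app's startup.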
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3d:01.6 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3d:01.7 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3d:02.0 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3d:02.1 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3d:02.2 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3d:02.3 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3d:02.4 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3d:02.5 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3d:02.6 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3d:02.7 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3f:01.0 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3f:01.1 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3f:01.2 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3f:01.3 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3f:01.4 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3f:01.5 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3f:01.6 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3f:01.7 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3f:02.0 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3f:02.1 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3f:02.2 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3f:02.3 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3f:02.4 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3f:02.5 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3f:02.6 cannot be used
00:32:58.885 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.885 EAL: Requested device 0000:3f:02.7 cannot be used
00:32:58.885 [2024-07-13 22:16:18.193723] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:59.142 [2024-07-13 22:16:18.399494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:32:59.142 [2024-07-13 22:16:18.420702] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:32:59.142 [2024-07-13 22:16:18.428724] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:32:59.142 [2024-07-13 22:16:18.436750] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:32:59.400 [2024-07-13 22:16:18.731480] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97
00:33:02.680 [2024-07-13 22:16:21.379146] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1"
00:33:02.680 [2024-07-13 22:16:21.379210] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:33:02.680 [2024-07-13 22:16:21.379227] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:33:02.680 [2024-07-13 22:16:21.387162] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2"
00:33:02.680 [2024-07-13 22:16:21.387194] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:33:02.680 [2024-07-13 22:16:21.387205] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:33:02.680 [2024-07-13 22:16:21.395192] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3"
00:33:02.680 [2024-07-13 22:16:21.395229] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:33:02.680 [2024-07-13 22:16:21.395240] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:33:02.680 [2024-07-13 22:16:21.403186] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4"
00:33:02.680 [2024-07-13 22:16:21.403230] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:33:02.680 [2024-07-13 22:16:21.403241] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:33:02.937 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:33:02.937 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@862 -- # return 0
00:33:02.937 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4'
00:33:02.937 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:33:02.937 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4')
00:33:02.937 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:33:02.937 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4'
00:33:02.937 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:33:02.937 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4')
00:33:02.937 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:33:02.937 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:33:02.937 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:33:02.937 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:33:02.937 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 ))
00:33:02.937 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram
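The waitfornbd checks traced next (a bounded `grep` poll of /proc/partitions followed by a one-block direct `dd` read and a `stat` size check) can be sketched roughly as below. The retry count and dd flags mirror the trace, but this is an illustrative reconstruction, not the autotest_common.sh source:

```shell
#!/usr/bin/env bash
# Illustrative waitfornbd-style helper: poll until the named nbd device
# appears in /proc/partitions, then prove it actually serves I/O with a
# single 4 KiB direct read, as in the grep/dd/stat sequence in the log.
waitfornbd() {
    local nbd_name="$1" i
    local tmp="/tmp/nbdtest.$$"
    for ((i = 1; i <= 20; i++)); do           # bounded retry, as in the trace
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    grep -q -w "$nbd_name" /proc/partitions || return 1
    # One direct-I/O read: the device must answer and produce nonzero data.
    if dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null &&
           [ "$(stat -c %s "$tmp")" -ne 0 ]; then
        rm -f "$tmp"
        return 0
    fi
    rm -f "$tmp"
    return 1
}
```

The `iflag=direct` read is the interesting design choice: it bypasses the page cache, so a passing check means the nbd server behind the device answered a real block request, not that a cached page was returned.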
00:33:03.195 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:33:03.195 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:33:03.195 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:33:03.195 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:33:03.195 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i
00:33:03.195 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:33:03.195 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:33:03.195 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:33:03.195 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break
00:33:03.195 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:33:03.195 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:33:03.195 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:33:03.196 1+0 records in
00:33:03.196 1+0 records out
00:33:03.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279648 s, 14.6 MB/s
00:33:03.196 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:33:03.196 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096
00:33:03.196 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:33:03.196 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:33:03.196 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0
00:33:03.196 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:33:03.196 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 ))
00:33:03.196 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2
00:33:03.196 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:33:03.196 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:33:03.196 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:33:03.196 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:33:03.196 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i
00:33:03.196 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:33:03.196 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:33:03.196 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:33:03.196 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break
00:33:03.196 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:33:03.196 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:33:03.196 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:33:03.196 1+0 records in
00:33:03.196 1+0 records out
00:33:03.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356125 s, 11.5 MB/s
00:33:03.196 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:33:03.196 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096
00:33:03.196 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:33:03.196 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:33:03.196 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0
00:33:03.196 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:33:03.196 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 ))
00:33:03.196 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3
00:33:03.453 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:33:03.453 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:33:03.453 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:33:03.453 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2
00:33:03.453 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i
00:33:03.453 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:33:03.453 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:33:03.453 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions
00:33:03.453 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- 
# break 00:33:03.453 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:03.453 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:03.453 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:03.453 1+0 records in 00:33:03.453 1+0 records out 00:33:03.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029162 s, 14.0 MB/s 00:33:03.453 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:03.453 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:33:03.453 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:03.453 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:03.453 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:33:03.453 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:33:03.453 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:33:03.453 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 00:33:03.711 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:33:03.711 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:33:03.711 22:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:33:03.711 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 
00:33:03.711 22:16:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:33:03.711 22:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:03.711 22:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:03.711 22:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:33:03.711 22:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:33:03.711 22:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:03.711 22:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:03.711 22:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:03.711 1+0 records in 00:33:03.711 1+0 records out 00:33:03.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350753 s, 11.7 MB/s 00:33:03.711 22:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:03.711 22:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:33:03.712 22:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:03.712 22:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:03.712 22:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:33:03.712 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:33:03.712 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:33:03.712 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:03.970 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:33:03.970 { 00:33:03.970 "nbd_device": "/dev/nbd0", 00:33:03.970 "bdev_name": "crypto_ram" 00:33:03.970 }, 00:33:03.970 { 00:33:03.970 "nbd_device": "/dev/nbd1", 00:33:03.970 "bdev_name": "crypto_ram2" 00:33:03.970 }, 00:33:03.970 { 00:33:03.970 "nbd_device": "/dev/nbd2", 00:33:03.970 "bdev_name": "crypto_ram3" 00:33:03.970 }, 00:33:03.970 { 00:33:03.970 "nbd_device": "/dev/nbd3", 00:33:03.970 "bdev_name": "crypto_ram4" 00:33:03.970 } 00:33:03.970 ]' 00:33:03.970 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:33:03.970 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:33:03.970 { 00:33:03.970 "nbd_device": "/dev/nbd0", 00:33:03.970 "bdev_name": "crypto_ram" 00:33:03.970 }, 00:33:03.970 { 00:33:03.970 "nbd_device": "/dev/nbd1", 00:33:03.970 "bdev_name": "crypto_ram2" 00:33:03.970 }, 00:33:03.970 { 00:33:03.970 "nbd_device": "/dev/nbd2", 00:33:03.970 "bdev_name": "crypto_ram3" 00:33:03.970 }, 00:33:03.970 { 00:33:03.970 "nbd_device": "/dev/nbd3", 00:33:03.970 "bdev_name": "crypto_ram4" 00:33:03.970 } 00:33:03.970 ]' 00:33:03.970 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:33:03.970 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:33:03.970 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:03.970 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:33:03.970 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local 
nbd_list 00:33:03.970 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:33:03.970 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:03.970 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:04.228 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:04.228 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:04.228 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:04.228 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:04.228 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:04.228 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:04.228 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:04.228 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:04.228 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:04.228 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:33:04.228 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:04.228 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:04.228 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:04.228 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:04.228 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:33:04.228 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:04.228 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:04.228 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:04.228 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:04.228 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:33:04.485 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:33:04.485 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:33:04.485 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:33:04.485 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:04.485 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:04.485 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:33:04.485 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:04.485 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:04.485 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:04.485 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:33:04.743 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:33:04.743 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:33:04.743 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd3 00:33:04.743 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:04.743 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:04.743 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:33:04.743 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:04.743 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:04.743 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:04.743 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:04.743 22:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 
00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:33:05.001 /dev/nbd0 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:05.001 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:05.259 1+0 records in 00:33:05.259 1+0 records out 00:33:05.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313598 s, 13.1 MB/s 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd1 00:33:05.259 /dev/nbd1 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:33:05.259 1+0 records in 00:33:05.259 1+0 records out 00:33:05.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323579 s, 12.7 MB/s 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:33:05.259 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd10 00:33:05.516 /dev/nbd10 00:33:05.516 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:33:05.516 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:33:05.516 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:33:05.516 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:33:05.516 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:05.516 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:05.516 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:33:05.516 22:16:24 blockdev_crypto_aesni.bdev_nbd -- 
common/autotest_common.sh@871 -- # break 00:33:05.516 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:05.516 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:05.516 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:05.516 1+0 records in 00:33:05.516 1+0 records out 00:33:05.516 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304874 s, 13.4 MB/s 00:33:05.516 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:05.516 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:33:05.516 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:05.516 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:05.516 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:33:05.516 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:05.516 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:33:05.516 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 /dev/nbd11 00:33:05.774 /dev/nbd11 00:33:05.774 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:33:05.774 22:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:33:05.774 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:33:05.774 22:16:24 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:33:05.774 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:05.774 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:05.774 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:33:05.774 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:33:05.774 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:05.774 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:05.774 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:05.774 1+0 records in 00:33:05.774 1+0 records out 00:33:05.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337721 s, 12.1 MB/s 00:33:05.774 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:05.774 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:33:05.774 22:16:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:05.774 22:16:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:05.774 22:16:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:33:05.774 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:05.774 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:33:05.774 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:33:05.774 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:05.774 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:33:06.032 { 00:33:06.032 "nbd_device": "/dev/nbd0", 00:33:06.032 "bdev_name": "crypto_ram" 00:33:06.032 }, 00:33:06.032 { 00:33:06.032 "nbd_device": "/dev/nbd1", 00:33:06.032 "bdev_name": "crypto_ram2" 00:33:06.032 }, 00:33:06.032 { 00:33:06.032 "nbd_device": "/dev/nbd10", 00:33:06.032 "bdev_name": "crypto_ram3" 00:33:06.032 }, 00:33:06.032 { 00:33:06.032 "nbd_device": "/dev/nbd11", 00:33:06.032 "bdev_name": "crypto_ram4" 00:33:06.032 } 00:33:06.032 ]' 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:33:06.032 { 00:33:06.032 "nbd_device": "/dev/nbd0", 00:33:06.032 "bdev_name": "crypto_ram" 00:33:06.032 }, 00:33:06.032 { 00:33:06.032 "nbd_device": "/dev/nbd1", 00:33:06.032 "bdev_name": "crypto_ram2" 00:33:06.032 }, 00:33:06.032 { 00:33:06.032 "nbd_device": "/dev/nbd10", 00:33:06.032 "bdev_name": "crypto_ram3" 00:33:06.032 }, 00:33:06.032 { 00:33:06.032 "nbd_device": "/dev/nbd11", 00:33:06.032 "bdev_name": "crypto_ram4" 00:33:06.032 } 00:33:06.032 ]' 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:33:06.032 /dev/nbd1 00:33:06.032 /dev/nbd10 00:33:06.032 /dev/nbd11' 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:33:06.032 /dev/nbd1 00:33:06.032 /dev/nbd10 00:33:06.032 /dev/nbd11' 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c 
/dev/nbd 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:33:06.032 256+0 records in 00:33:06.032 256+0 records out 00:33:06.032 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00461453 s, 227 MB/s 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:33:06.032 256+0 records in 00:33:06.032 256+0 records out 00:33:06.032 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0382759 s, 27.4 MB/s 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:33:06.032 256+0 records in 00:33:06.032 256+0 records out 00:33:06.032 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0362481 s, 28.9 MB/s 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:33:06.032 256+0 records in 00:33:06.032 256+0 records out 00:33:06.032 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0447667 s, 23.4 MB/s 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:33:06.032 256+0 records in 00:33:06.032 256+0 records out 00:33:06.032 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289054 s, 36.3 MB/s 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:33:06.032 22:16:25 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:06.032 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:33:06.290 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:06.290 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:33:06.290 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:06.290 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:33:06.290 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:33:06.290 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:33:06.290 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:06.290 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:33:06.291 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 
-- # local nbd_list 00:33:06.291 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:33:06.291 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:06.291 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:06.291 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:06.291 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:06.291 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:06.291 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:06.291 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:06.291 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:06.291 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:06.291 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:06.291 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:06.291 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:33:06.549 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:06.549 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:06.549 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:06.549 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:06.549 22:16:25 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:06.549 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:06.549 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:06.549 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:06.549 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:06.549 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:33:06.807 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:33:06.807 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:33:06.807 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:33:06.807 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:06.807 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:06.807 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:33:06.807 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:06.807 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:06.807 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:06.807 22:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:33:06.807 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:33:06.807 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:33:06.807 22:16:26 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:33:06.807 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:06.807 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:06.807 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:33:06.807 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:06.807 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:06.807 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:06.807 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:06.807 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:07.065 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:07.065 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:07.065 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:07.065 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:07.065 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:33:07.065 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:07.065 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:33:07.065 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:33:07.066 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:33:07.066 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:33:07.066 22:16:26 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:33:07.066 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:33:07.066 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:33:07.066 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:07.066 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:33:07.066 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:33:07.066 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:33:07.066 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:33:07.324 malloc_lvol_verify 00:33:07.324 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:33:07.582 f047e446-3a75-48a4-809f-44eb1418db10 00:33:07.582 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:33:07.582 a47c909d-3f75-41ff-a8be-63c74aa688b3 00:33:07.582 22:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:33:07.841 /dev/nbd0 00:33:07.841 22:16:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:33:07.841 mke2fs 1.46.5 (30-Dec-2021) 00:33:07.841 Discarding device blocks: 
0/4096 done 00:33:07.841 Creating filesystem with 4096 1k blocks and 1024 inodes 00:33:07.841 00:33:07.841 Allocating group tables: 0/1 done 00:33:07.841 Writing inode tables: 0/1 done 00:33:07.841 Creating journal (1024 blocks): done 00:33:07.841 Writing superblocks and filesystem accounting information: 0/1 done 00:33:07.841 00:33:07.841 22:16:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:33:07.841 22:16:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:07.841 22:16:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:07.841 22:16:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:07.841 22:16:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:07.841 22:16:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:33:07.841 22:16:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:07.841 22:16:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:08.099 22:16:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:08.099 22:16:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:08.099 22:16:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:08.099 22:16:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:08.099 22:16:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:08.099 22:16:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:08.099 22:16:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:08.099 
22:16:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:08.099 22:16:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:33:08.099 22:16:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:33:08.099 22:16:27 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 1580372 00:33:08.099 22:16:27 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 1580372 ']' 00:33:08.099 22:16:27 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 1580372 00:33:08.099 22:16:27 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:33:08.099 22:16:27 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:08.099 22:16:27 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1580372 00:33:08.099 22:16:27 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:08.099 22:16:27 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:08.099 22:16:27 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1580372' 00:33:08.099 killing process with pid 1580372 00:33:08.099 22:16:27 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@967 -- # kill 1580372 00:33:08.099 22:16:27 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@972 -- # wait 1580372 00:33:10.002 22:16:29 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:33:10.002 00:33:10.002 real 0m11.312s 00:33:10.002 user 0m13.480s 00:33:10.002 sys 0m3.188s 00:33:10.002 22:16:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:10.002 22:16:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:33:10.002 ************************************ 
00:33:10.002 END TEST bdev_nbd 00:33:10.002 ************************************ 00:33:10.002 22:16:29 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:33:10.002 22:16:29 blockdev_crypto_aesni -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:33:10.003 22:16:29 blockdev_crypto_aesni -- bdev/blockdev.sh@764 -- # '[' crypto_aesni = nvme ']' 00:33:10.003 22:16:29 blockdev_crypto_aesni -- bdev/blockdev.sh@764 -- # '[' crypto_aesni = gpt ']' 00:33:10.003 22:16:29 blockdev_crypto_aesni -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:33:10.003 22:16:29 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:10.003 22:16:29 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:10.003 22:16:29 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:33:10.003 ************************************ 00:33:10.003 START TEST bdev_fio 00:33:10.003 ************************************ 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:33:10.003 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- 
common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram]' 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram2]' 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram2 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram3]' 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram3 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram4]' 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram4 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:10.003 22:16:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:33:10.261 ************************************ 00:33:10.261 START TEST bdev_fio_rw_verify 00:33:10.261 ************************************ 00:33:10.261 22:16:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:10.261 22:16:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:10.261 22:16:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:10.261 22:16:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:10.261 22:16:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:10.261 22:16:29 
blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:33:10.261 22:16:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:33:10.261 22:16:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:10.261 22:16:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:10.261 22:16:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:33:10.261 22:16:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:33:10.261 22:16:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:10.262 22:16:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:10.262 22:16:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:10.262 22:16:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:33:10.262 22:16:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:10.262 22:16:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:10.520 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:10.520 job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:10.520 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:10.520 job_crypto_ram4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:10.520 fio-3.35 00:33:10.520 Starting 4 threads 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:33:10.793 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3d:02.3 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 
EAL: Requested device 0000:3f:01.7 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:10.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.793 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:25.710 00:33:25.710 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=1583079: Sat Jul 13 22:16:43 2024 00:33:25.710 read: IOPS=29.9k, BW=117MiB/s (123MB/s)(1170MiB/10001msec) 00:33:25.710 slat (usec): min=13, max=323, avg=43.98, stdev=29.42 00:33:25.710 clat (usec): min=13, max=2221, avg=242.53, stdev=169.04 00:33:25.710 lat (usec): min=36, max=2464, avg=286.51, stdev=187.72 00:33:25.710 clat percentiles (usec): 00:33:25.710 | 50.000th=[ 200], 99.000th=[ 865], 99.900th=[ 1029], 99.990th=[ 1270], 00:33:25.710 | 99.999th=[ 2147] 00:33:25.710 write: IOPS=32.8k, BW=128MiB/s (135MB/s)(1251MiB/9749msec); 0 zone resets 00:33:25.710 slat (usec): min=16, max=536, avg=53.56, stdev=29.23 00:33:25.710 clat (usec): min=25, max=2217, avg=293.99, stdev=200.01 00:33:25.710 lat (usec): min=58, max=2353, avg=347.56, 
stdev=218.04 00:33:25.710 clat percentiles (usec): 00:33:25.710 | 50.000th=[ 253], 99.000th=[ 1037], 99.900th=[ 1237], 99.990th=[ 1418], 00:33:25.710 | 99.999th=[ 2114] 00:33:25.710 bw ( KiB/s): min=107264, max=162752, per=97.92%, avg=128626.95, stdev=4242.11, samples=76 00:33:25.710 iops : min=26816, max=40688, avg=32156.74, stdev=1060.53, samples=76 00:33:25.710 lat (usec) : 20=0.01%, 50=0.13%, 100=12.84%, 250=44.59%, 500=31.71% 00:33:25.710 lat (usec) : 750=7.49%, 1000=2.57% 00:33:25.710 lat (msec) : 2=0.67%, 4=0.01% 00:33:25.710 cpu : usr=99.26%, sys=0.29%, ctx=65, majf=0, minf=29580 00:33:25.710 IO depths : 1=10.4%, 2=25.5%, 4=51.1%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:25.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.710 complete : 0=0.0%, 4=88.7%, 8=11.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.710 issued rwts: total=299516,320167,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.710 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:25.710 00:33:25.710 Run status group 0 (all jobs): 00:33:25.710 READ: bw=117MiB/s (123MB/s), 117MiB/s-117MiB/s (123MB/s-123MB/s), io=1170MiB (1227MB), run=10001-10001msec 00:33:25.710 WRITE: bw=128MiB/s (135MB/s), 128MiB/s-128MiB/s (135MB/s-135MB/s), io=1251MiB (1311MB), run=9749-9749msec 00:33:25.967 ----------------------------------------------------- 00:33:25.967 Suppressions used: 00:33:25.967 count bytes template 00:33:25.967 4 47 /usr/src/fio/parse.c 00:33:25.967 1626 156096 /usr/src/fio/iolog.c 00:33:25.967 1 8 libtcmalloc_minimal.so 00:33:25.967 1 904 libcrypto.so 00:33:25.967 ----------------------------------------------------- 00:33:25.967 00:33:25.967 00:33:25.967 real 0m15.880s 00:33:25.967 user 0m53.499s 00:33:25.968 sys 0m0.830s 00:33:25.968 22:16:45 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:25.968 22:16:45 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 
-- # set +x 00:33:25.968 ************************************ 00:33:25.968 END TEST bdev_fio_rw_verify 00:33:25.968 ************************************ 00:33:25.968 22:16:45 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:33:25.968 22:16:45 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:33:25.968 22:16:45 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:26.227 22:16:45 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:33:26.227 22:16:45 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:26.227 22:16:45 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:33:26.227 22:16:45 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:33:26.227 22:16:45 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:33:26.227 22:16:45 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:33:26.227 22:16:45 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:33:26.227 22:16:45 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:33:26.227 22:16:45 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:33:26.227 22:16:45 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:26.227 22:16:45 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:33:26.227 22:16:45 blockdev_crypto_aesni.bdev_fio 
-- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:33:26.227 22:16:45 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:33:26.227 22:16:45 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:33:26.227 22:16:45 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:33:26.227 22:16:45 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "3169081f-7bcc-5632-bc89-1782a05f1870"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3169081f-7bcc-5632-bc89-1782a05f1870",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "5668ab86-e302-50eb-af6b-0500765ab723"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": 
"5668ab86-e302-50eb-af6b-0500765ab723",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "5ded7d7f-22c7-512f-a598-34063b9d3ad6"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "5ded7d7f-22c7-512f-a598-34063b9d3ad6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' 
"dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "37e285e5-191c-546c-9283-7472c2cf004f"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "37e285e5-191c-546c-9283-7472c2cf004f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:33:26.227 22:16:45 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n crypto_ram 00:33:26.227 crypto_ram2 00:33:26.227 crypto_ram3 00:33:26.227 crypto_ram4 ]] 00:33:26.227 22:16:45 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:33:26.227 22:16:45 
blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' [same bdev JSON dump as printed by bdev/blockdev.sh@355 above] 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram]' 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram2]' 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram2 00:33:26.228 22:16:45
blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram3]' 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram3 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram4]' 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram4 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:33:26.228 ************************************ 00:33:26.228 START TEST bdev_fio_trim 00:33:26.228 ************************************ 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 
--verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:26.228 
22:16:45 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # break 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:26.228 22:16:45 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:26.806 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:26.806 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:26.806 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:26.806 job_crypto_ram4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:26.806 fio-3.35 00:33:26.806 Starting 4 threads 00:33:26.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:26.806 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:26.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:26.806 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:26.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:26.806 
EAL: Requested device 0000:3d:01.2 cannot be used 00:33:26.806 [identical "Reached maximum number of QAT devices" / "cannot be used" messages repeated for the remaining 0000:3d and 0000:3f virtual functions] 00:33:26.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:26.806 EAL: Requested device 0000:3f:02.6 cannot be
used 00:33:26.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:26.806 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:41.668 00:33:41.668 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=1585853: Sat Jul 13 22:16:59 2024 00:33:41.668 write: IOPS=52.7k, BW=206MiB/s (216MB/s)(2057MiB/10001msec); 0 zone resets 00:33:41.668 slat (usec): min=12, max=468, avg=42.73, stdev=25.56 00:33:41.668 clat (usec): min=39, max=1762, avg=194.08, stdev=129.41 00:33:41.668 lat (usec): min=52, max=1976, avg=236.80, stdev=146.03 00:33:41.668 clat percentiles (usec): 00:33:41.668 | 50.000th=[ 159], 99.000th=[ 685], 99.900th=[ 840], 99.990th=[ 963], 00:33:41.668 | 99.999th=[ 1582] 00:33:41.668 bw ( KiB/s): min=184432, max=257280, per=100.00%, avg=211938.47, stdev=8462.71, samples=76 00:33:41.668 iops : min=46108, max=64320, avg=52984.68, stdev=2115.71, samples=76 00:33:41.668 trim: IOPS=52.7k, BW=206MiB/s (216MB/s)(2057MiB/10001msec); 0 zone resets 00:33:41.668 slat (usec): min=4, max=480, avg=11.72, stdev= 5.11 00:33:41.668 clat (usec): min=52, max=1211, avg=182.77, stdev=84.11 00:33:41.668 lat (usec): min=57, max=1248, avg=194.49, stdev=86.15 00:33:41.668 clat percentiles (usec): 00:33:41.668 | 50.000th=[ 169], 99.000th=[ 474], 99.900th=[ 578], 99.990th=[ 660], 00:33:41.668 | 99.999th=[ 1090] 00:33:41.668 bw ( KiB/s): min=184424, max=257288, per=100.00%, avg=211940.53, stdev=8463.24, samples=76 00:33:41.668 iops : min=46106, max=64322, avg=52985.11, stdev=2115.80, samples=76 00:33:41.668 lat (usec) : 50=1.17%, 100=16.33%, 250=62.81%, 500=17.58%, 750=1.88% 00:33:41.668 lat (usec) : 1000=0.23% 00:33:41.668 lat (msec) : 2=0.01% 00:33:41.668 cpu : usr=99.55%, sys=0.04%, ctx=86, majf=0, minf=7676 00:33:41.668 IO depths : 1=8.0%, 2=26.3%, 4=52.6%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:41.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:41.668 complete : 0=0.0%, 4=88.4%, 8=11.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:33:41.668 issued rwts: total=0,526633,526634,0 short=0,0,0,0 dropped=0,0,0,0 00:33:41.668 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:41.668 00:33:41.668 Run status group 0 (all jobs): 00:33:41.668 WRITE: bw=206MiB/s (216MB/s), 206MiB/s-206MiB/s (216MB/s-216MB/s), io=2057MiB (2157MB), run=10001-10001msec 00:33:41.668 TRIM: bw=206MiB/s (216MB/s), 206MiB/s-206MiB/s (216MB/s-216MB/s), io=2057MiB (2157MB), run=10001-10001msec 00:33:41.925 ----------------------------------------------------- 00:33:41.925 Suppressions used: 00:33:41.925 count bytes template 00:33:41.925 4 47 /usr/src/fio/parse.c 00:33:41.925 1 8 libtcmalloc_minimal.so 00:33:41.925 1 904 libcrypto.so 00:33:41.925 ----------------------------------------------------- 00:33:41.925 00:33:42.183 00:33:42.183 real 0m15.810s 00:33:42.183 user 0m53.315s 00:33:42.183 sys 0m0.728s 00:33:42.183 22:17:01 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:42.183 22:17:01 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:33:42.183 ************************************ 00:33:42.183 END TEST bdev_fio_trim 00:33:42.183 ************************************ 00:33:42.183 22:17:01 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:33:42.183 22:17:01 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:33:42.183 22:17:01 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:42.183 22:17:01 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:33:42.183 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:33:42.183 22:17:01 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:33:42.183 00:33:42.183 real 0m32.042s 00:33:42.183 user 1m47.007s 00:33:42.183 sys 0m1.740s 00:33:42.183 22:17:01 blockdev_crypto_aesni.bdev_fio -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:33:42.183 22:17:01 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:33:42.183 ************************************ 00:33:42.183 END TEST bdev_fio 00:33:42.183 ************************************ 00:33:42.183 22:17:01 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:33:42.183 22:17:01 blockdev_crypto_aesni -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:42.183 22:17:01 blockdev_crypto_aesni -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:33:42.183 22:17:01 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:33:42.183 22:17:01 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:42.183 22:17:01 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:33:42.183 ************************************ 00:33:42.183 START TEST bdev_verify 00:33:42.183 ************************************ 00:33:42.183 22:17:01 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:33:42.183 [2024-07-13 22:17:01.536183] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:33:42.183 [2024-07-13 22:17:01.536269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587975 ] 00:33:42.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:42.441 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:42.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:42.441 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:42.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:42.441 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:42.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:42.441 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:42.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:42.441 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:42.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:42.441 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:42.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:42.441 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:42.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:42.441 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:42.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:42.441 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:42.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:42.441 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:42.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:42.441 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:42.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:42.441 EAL: Requested device 0000:3d:02.3 cannot be used 
00:33:42.441 [identical "Reached maximum number of QAT devices" / "cannot be used" messages repeated for the remaining 0000:3d and 0000:3f virtual functions through 0000:3f:02.7] 00:33:42.442 [2024-07-13 22:17:01.692876] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:42.700 [2024-07-13 22:17:01.892097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:42.700 [2024-07-13 22:17:01.892107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:42.700 [2024-07-13 22:17:01.913376] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:33:42.700 [2024-07-13 22:17:01.921389] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:33:42.700 [2024-07-13 22:17:01.929409] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:33:42.957 [2024-07-13 22:17:02.209266] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:33:45.487 [2024-07-13 22:17:04.866072] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" [2024-07-13 22:17:04.866139] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:33:45.487
[2024-07-13 22:17:04.866154] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:45.487 [2024-07-13 22:17:04.874099] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:33:45.487 [2024-07-13 22:17:04.874134] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:33:45.487 [2024-07-13 22:17:04.874148] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:45.744 [2024-07-13 22:17:04.882142] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:33:45.744 [2024-07-13 22:17:04.882174] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:33:45.744 [2024-07-13 22:17:04.882185] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:45.744 [2024-07-13 22:17:04.890128] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:33:45.744 [2024-07-13 22:17:04.890171] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:33:45.744 [2024-07-13 22:17:04.890183] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:45.744 Running I/O for 5 seconds... 
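The notices above show each crypto vbdev being registered against a base Malloc bdev and a DEK, with creation deferred until the base bdev arrives ("vbdev creation deferred pending base bdev arrival"). A minimal Python sketch of that deferred-registration pattern follows; the names and structure are illustrative assumptions, not SPDK's actual API:

```python
# Hypothetical sketch of deferred vbdev creation: a crypto vbdev request is
# parked until its base bdev shows up, mirroring the "deferred pending base
# bdev arrival" notices in the log above. Not SPDK code.
pending = {}  # base bdev name -> (vbdev name, key name)

def request_crypto_vbdev(base, name, key, present):
    """Create the vbdev now if the base bdev exists, else defer it."""
    if base in present:
        return f"created {name} on {base} with {key}"
    pending[base] = (name, key)
    return f"deferred {name} pending {base}"

def on_bdev_arrival(base):
    """Complete any vbdev creation that was waiting on this base bdev."""
    if base in pending:
        name, key = pending.pop(base)
        return f"created {name} on {base} with {key}"
    return None

print(request_crypto_vbdev("Malloc0", "crypto_ram", "test_dek_aesni_cbc_1", set()))
# -> deferred crypto_ram pending Malloc0
print(on_bdev_arrival("Malloc0"))
# -> created crypto_ram on Malloc0 with test_dek_aesni_cbc_1
```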
00:33:51.008 
00:33:51.008 Latency(us)
00:33:51.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:51.008 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:51.008 Verification LBA range: start 0x0 length 0x1000
00:33:51.008 crypto_ram : 5.06 648.17 2.53 0.00 0.00 196658.53 3486.52 136734.31
00:33:51.008 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:33:51.008 Verification LBA range: start 0x1000 length 0x1000
00:33:51.008 crypto_ram : 5.06 652.93 2.55 0.00 0.00 195388.84 4272.95 137573.17
00:33:51.008 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:51.008 Verification LBA range: start 0x0 length 0x1000
00:33:51.008 crypto_ram2 : 5.06 651.02 2.54 0.00 0.00 195561.17 4587.52 125829.12
00:33:51.008 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:33:51.008 Verification LBA range: start 0x1000 length 0x1000
00:33:51.008 crypto_ram2 : 5.06 655.77 2.56 0.00 0.00 194317.75 5400.17 125829.12
00:33:51.008 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:51.008 Verification LBA range: start 0x0 length 0x1000
00:33:51.008 crypto_ram3 : 5.04 5079.16 19.84 0.00 0.00 25004.87 6710.89 21600.67
00:33:51.008 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:33:51.008 Verification LBA range: start 0x1000 length 0x1000
00:33:51.008 crypto_ram3 : 5.04 5102.16 19.93 0.00 0.00 24901.58 6632.24 21705.52
00:33:51.008 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:51.009 Verification LBA range: start 0x0 length 0x1000
00:33:51.009 crypto_ram4 : 5.05 5093.93 19.90 0.00 0.00 24900.05 2896.69 21286.09
00:33:51.009 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:33:51.009 Verification LBA range: start 0x1000 length 0x1000
00:33:51.009 crypto_ram4 : 5.05 5108.39 19.95 0.00 0.00 24818.81 1395.92 21600.67
00:33:51.009 ===================================================================================================================
00:33:51.009 Total : 22991.52 89.81 0.00 0.00 44305.15 1395.92 137573.17
00:33:52.941 
00:33:52.941 real 0m10.698s
00:33:52.941 user 0m19.982s
00:33:52.941 sys 0m0.436s
00:33:52.941 22:17:12 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:33:52.941 22:17:12 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:33:52.941 ************************************
00:33:52.941 END TEST bdev_verify
00:33:52.941 ************************************
00:33:52.941 22:17:12 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0
00:33:52.941 22:17:12 blockdev_crypto_aesni -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:33:52.941 22:17:12 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']'
00:33:52.941 22:17:12 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable
00:33:52.941 22:17:12 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:33:52.941 ************************************
00:33:52.941 START TEST bdev_verify_big_io
00:33:52.941 ************************************
00:33:52.941 22:17:12 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:33:52.941 [2024-07-13 22:17:12.324471] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:33:52.941 [2024-07-13 22:17:12.324583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1589580 ] 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3d:02.3 cannot be used 
00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:53.200 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:53.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:53.200 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:53.200 [2024-07-13 22:17:12.485818] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:53.458 [2024-07-13 22:17:12.692400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:53.458 [2024-07-13 22:17:12.692411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:53.458 [2024-07-13 22:17:12.713717] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:33:53.458 [2024-07-13 22:17:12.721733] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:33:53.458 [2024-07-13 22:17:12.729745] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:33:53.716 [2024-07-13 22:17:13.010421] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:33:56.998 [2024-07-13 22:17:15.666785] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:33:56.998 [2024-07-13 22:17:15.666845] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:33:56.998 
[2024-07-13 22:17:15.666860] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:56.998 [2024-07-13 22:17:15.674804] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:33:56.998 [2024-07-13 22:17:15.674837] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:33:56.998 [2024-07-13 22:17:15.674848] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:56.998 [2024-07-13 22:17:15.682848] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:33:56.998 [2024-07-13 22:17:15.682877] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:33:56.998 [2024-07-13 22:17:15.682888] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:56.998 [2024-07-13 22:17:15.690842] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:33:56.998 [2024-07-13 22:17:15.690884] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:33:56.998 [2024-07-13 22:17:15.690895] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:56.998 Running I/O for 5 seconds... 
00:34:02.270 
00:34:02.270 Latency(us)
00:34:02.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:02.270 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:34:02.270 Verification LBA range: start 0x0 length 0x100
00:34:02.270 crypto_ram : 5.54 69.29 4.33 0.00 0.00 1802916.90 5976.88 1516660.33
00:34:02.270 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:34:02.270 Verification LBA range: start 0x100 length 0x100
00:34:02.270 crypto_ram : 5.46 70.34 4.40 0.00 0.00 1779289.02 7077.89 1523371.21
00:34:02.270 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:34:02.270 Verification LBA range: start 0x0 length 0x100
00:34:02.270 crypto_ram2 : 5.54 69.28 4.33 0.00 0.00 1754421.49 5452.60 1516660.33
00:34:02.270 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:34:02.270 Verification LBA range: start 0x100 length 0x100
00:34:02.270 crypto_ram2 : 5.46 70.50 4.41 0.00 0.00 1740133.05 6815.74 1523371.21
00:34:02.270 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:34:02.270 Verification LBA range: start 0x0 length 0x100
00:34:02.270 crypto_ram3 : 5.41 495.43 30.96 0.00 0.00 238523.24 3905.95 312056.22
00:34:02.270 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:34:02.270 Verification LBA range: start 0x100 length 0x100
00:34:02.270 crypto_ram3 : 5.33 503.50 31.47 0.00 0.00 238745.10 36909.88 320444.83
00:34:02.270 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:34:02.270 Verification LBA range: start 0x0 length 0x100
00:34:02.270 crypto_ram4 : 5.48 513.12 32.07 0.00 0.00 227243.28 12006.20 312056.22
00:34:02.270 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:34:02.270 Verification LBA range: start 0x100 length 0x100
00:34:02.270 crypto_ram4 : 5.38 516.43 32.28 0.00 0.00 228912.40 11639.19 313733.94
00:34:02.270 ===================================================================================================================
00:34:02.270 Total : 2307.89 144.24 0.00 0.00 422177.72 3905.95 1523371.21
00:34:04.171 
00:34:04.171 real 0m11.319s
00:34:04.171 user 0m21.165s
00:34:04.171 sys 0m0.473s
00:34:04.171 22:17:23 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable
00:34:04.171 22:17:23 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:34:04.171 ************************************
00:34:04.171 END TEST bdev_verify_big_io
00:34:04.171 ************************************
00:34:04.430 22:17:23 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0
00:34:04.430 22:17:23 blockdev_crypto_aesni -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:34:04.430 22:17:23 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:34:04.430 22:17:23 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable
00:34:04.430 22:17:23 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:34:04.430 ************************************
00:34:04.430 START TEST bdev_write_zeroes
00:34:04.430 ************************************
00:34:04.430 22:17:23 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:34:04.430 [2024-07-13 22:17:23.728652] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:34:04.430 [2024-07-13 22:17:23.728749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1591435 ] 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3d:02.3 cannot be used 
00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3f:02.1 cannot be used 00:34:04.689 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:04.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.689 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:04.689 [2024-07-13 22:17:23.889414] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:04.946 [2024-07-13 22:17:24.094365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:04.947 [2024-07-13 22:17:24.115653] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:34:04.947 [2024-07-13 22:17:24.123673] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:34:04.947 [2024-07-13 22:17:24.131685] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:34:05.204 [2024-07-13 22:17:24.407389] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:34:07.733 [2024-07-13 22:17:27.037525] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:34:07.733 [2024-07-13 22:17:27.037587] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:34:07.733 [2024-07-13 22:17:27.037600] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred 
pending base bdev arrival 00:34:07.733 [2024-07-13 22:17:27.045542] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:34:07.733 [2024-07-13 22:17:27.045574] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:34:07.733 [2024-07-13 22:17:27.045586] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:07.733 [2024-07-13 22:17:27.053580] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:34:07.733 [2024-07-13 22:17:27.053610] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:34:07.733 [2024-07-13 22:17:27.053621] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:07.733 [2024-07-13 22:17:27.061580] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:34:07.733 [2024-07-13 22:17:27.061606] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:34:07.733 [2024-07-13 22:17:27.061617] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:07.991 Running I/O for 1 seconds... 
00:34:08.924 
00:34:08.924 Latency(us)
00:34:08.924 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:08.924 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:34:08.924 crypto_ram : 1.02 2769.19 10.82 0.00 0.00 45938.33 4325.38 55784.24
00:34:08.924 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:34:08.924 crypto_ram2 : 1.02 2775.09 10.84 0.00 0.00 45647.53 4141.88 51380.22
00:34:08.924 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:34:08.924 crypto_ram3 : 1.02 21550.88 84.18 0.00 0.00 5869.24 1769.47 7759.46
00:34:08.925 Job: crypto_ram4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:34:08.925 crypto_ram4 : 1.02 21588.89 84.33 0.00 0.00 5845.78 1756.36 6527.39
00:34:08.925 ===================================================================================================================
00:34:08.925 Total : 48684.06 190.17 0.00 0.00 10420.83 1756.36 55784.24
00:34:10.823 
00:34:10.823 real 0m6.449s
00:34:10.823 user 0m5.949s
00:34:10.823 sys 0m0.444s
00:34:10.823 22:17:30 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable
00:34:10.823 22:17:30 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:34:10.823 ************************************
00:34:10.823 END TEST bdev_write_zeroes
00:34:10.823 ************************************
00:34:10.823 22:17:30 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0
00:34:10.823 22:17:30 blockdev_crypto_aesni -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:34:10.823 22:17:30 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:34:10.823 
22:17:30 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:10.823 22:17:30 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:34:10.823 ************************************ 00:34:10.823 START TEST bdev_json_nonenclosed 00:34:10.823 ************************************ 00:34:10.823 22:17:30 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:11.081 [2024-07-13 22:17:30.274227] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:34:11.081 [2024-07-13 22:17:30.274320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1592519 ] 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 
0000:3d:01.6 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3d:02.3 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3f:01.4 cannot be 
used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3f:02.1 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:11.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:11.081 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:11.081 [2024-07-13 22:17:30.434400] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.339 [2024-07-13 22:17:30.636618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:11.339 [2024-07-13 22:17:30.636699] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:34:11.339 [2024-07-13 22:17:30.636718] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:11.339 [2024-07-13 22:17:30.636730] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:11.939 00:34:11.939 real 0m0.860s 00:34:11.939 user 0m0.663s 00:34:11.939 sys 0m0.192s 00:34:11.939 22:17:31 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:34:11.939 22:17:31 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:11.939 22:17:31 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:34:11.939 ************************************ 00:34:11.939 END TEST bdev_json_nonenclosed 00:34:11.939 ************************************ 00:34:11.939 22:17:31 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 234 00:34:11.939 22:17:31 blockdev_crypto_aesni -- bdev/blockdev.sh@782 -- # true 00:34:11.939 22:17:31 blockdev_crypto_aesni -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:11.939 22:17:31 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:34:11.939 22:17:31 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:11.939 22:17:31 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:34:11.939 ************************************ 00:34:11.939 START TEST bdev_json_nonarray 00:34:11.939 ************************************ 00:34:11.939 22:17:31 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 
00:34:11.939 [2024-07-13 22:17:31.214513] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:34:11.939 [2024-07-13 22:17:31.214604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1592785 ]
00:34:11.939 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:11.939 EAL: Requested device 0000:3d:01.0 cannot be used
00:34:11.939 [message pair repeated for devices 0000:3d:01.1 through 0000:3f:02.7]
00:34:12.197 [2024-07-13 22:17:31.370401] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:12.197 [2024-07-13 22:17:31.567363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:34:12.197 [2024-07-13 22:17:31.567450] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
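The two json_config failures in this run ("not enclosed in {}" and "'subsystems' should be an array") are the expected outcomes: the nonenclosed/nonarray tests deliberately feed bdevperf a config that is not a single JSON object, or whose `subsystems` member is not an array. A minimal sketch of those two checks in Python (a hypothetical helper for illustration, not SPDK's actual C implementation):

```python
import json

def validate_spdk_config(text: str) -> str:
    # Mirrors the two json_config_prepare_ctx errors seen in the log above
    # (a sketch, not SPDK's real validation code).
    try:
        cfg = json.loads(text)
    except json.JSONDecodeError:
        return "invalid JSON"
    if not isinstance(cfg, dict):
        return "not enclosed in {}"
    if not isinstance(cfg.get("subsystems"), list):
        return "'subsystems' should be an array"
    return "ok"

print(validate_spdk_config('[1, 2]'))              # not enclosed in {}
print(validate_spdk_config('{"subsystems": {}}'))  # 'subsystems' should be an array
print(validate_spdk_config('{"subsystems": []}'))  # ok
```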
00:34:12.197 [2024-07-13 22:17:31.567476] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:12.197 [2024-07-13 22:17:31.567488] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:12.763 00:34:12.763 real 0m0.847s 00:34:12.763 user 0m0.639s 00:34:12.763 sys 0m0.205s 00:34:12.763 22:17:31 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:34:12.763 22:17:31 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:12.763 22:17:31 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:34:12.763 ************************************ 00:34:12.763 END TEST bdev_json_nonarray 00:34:12.763 ************************************ 00:34:12.763 22:17:32 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 234 00:34:12.763 22:17:32 blockdev_crypto_aesni -- bdev/blockdev.sh@785 -- # true 00:34:12.763 22:17:32 blockdev_crypto_aesni -- bdev/blockdev.sh@787 -- # [[ crypto_aesni == bdev ]] 00:34:12.763 22:17:32 blockdev_crypto_aesni -- bdev/blockdev.sh@794 -- # [[ crypto_aesni == gpt ]] 00:34:12.763 22:17:32 blockdev_crypto_aesni -- bdev/blockdev.sh@798 -- # [[ crypto_aesni == crypto_sw ]] 00:34:12.763 22:17:32 blockdev_crypto_aesni -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:34:12.763 22:17:32 blockdev_crypto_aesni -- bdev/blockdev.sh@811 -- # cleanup 00:34:12.763 22:17:32 blockdev_crypto_aesni -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:34:12.763 22:17:32 blockdev_crypto_aesni -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:34:12.763 22:17:32 blockdev_crypto_aesni -- bdev/blockdev.sh@26 -- # [[ crypto_aesni == rbd ]] 00:34:12.764 22:17:32 blockdev_crypto_aesni -- bdev/blockdev.sh@30 -- # [[ crypto_aesni == daos ]] 00:34:12.764 22:17:32 blockdev_crypto_aesni -- 
bdev/blockdev.sh@34 -- # [[ crypto_aesni = \g\p\t ]] 00:34:12.764 22:17:32 blockdev_crypto_aesni -- bdev/blockdev.sh@40 -- # [[ crypto_aesni == xnvme ]] 00:34:12.764 00:34:12.764 real 1m34.169s 00:34:12.764 user 3m20.024s 00:34:12.764 sys 0m9.175s 00:34:12.764 22:17:32 blockdev_crypto_aesni -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:12.764 22:17:32 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:34:12.764 ************************************ 00:34:12.764 END TEST blockdev_crypto_aesni 00:34:12.764 ************************************ 00:34:12.764 22:17:32 -- common/autotest_common.sh@1142 -- # return 0 00:34:12.764 22:17:32 -- spdk/autotest.sh@358 -- # run_test blockdev_crypto_sw /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw 00:34:12.764 22:17:32 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:12.764 22:17:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:12.764 22:17:32 -- common/autotest_common.sh@10 -- # set +x 00:34:12.764 ************************************ 00:34:12.764 START TEST blockdev_crypto_sw 00:34:12.764 ************************************ 00:34:12.764 22:17:32 blockdev_crypto_sw -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw 00:34:13.022 * Looking for test storage... 
00:34:13.022 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:34:13.022 22:17:32 blockdev_crypto_sw -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:34:13.022 22:17:32 blockdev_crypto_sw -- bdev/nbd_common.sh@6 -- # set -e 00:34:13.022 22:17:32 blockdev_crypto_sw -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:34:13.022 22:17:32 blockdev_crypto_sw -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:34:13.022 22:17:32 blockdev_crypto_sw -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:34:13.022 22:17:32 blockdev_crypto_sw -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:34:13.022 22:17:32 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:34:13.022 22:17:32 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:34:13.022 22:17:32 blockdev_crypto_sw -- bdev/blockdev.sh@20 -- # : 00:34:13.022 22:17:32 blockdev_crypto_sw -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:34:13.022 22:17:32 blockdev_crypto_sw -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:34:13.022 22:17:32 blockdev_crypto_sw -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:34:13.022 22:17:32 blockdev_crypto_sw -- bdev/blockdev.sh@674 -- # uname -s 00:34:13.022 22:17:32 blockdev_crypto_sw -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:34:13.022 22:17:32 blockdev_crypto_sw -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:34:13.022 22:17:32 blockdev_crypto_sw -- bdev/blockdev.sh@682 -- # test_type=crypto_sw 00:34:13.022 22:17:32 blockdev_crypto_sw -- bdev/blockdev.sh@683 -- # crypto_device= 00:34:13.022 22:17:32 blockdev_crypto_sw -- bdev/blockdev.sh@684 -- # dek= 00:34:13.022 22:17:32 blockdev_crypto_sw -- bdev/blockdev.sh@685 -- # env_ctx= 00:34:13.022 
22:17:32 blockdev_crypto_sw -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:34:13.022 22:17:32 blockdev_crypto_sw -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:34:13.022 22:17:32 blockdev_crypto_sw -- bdev/blockdev.sh@690 -- # [[ crypto_sw == bdev ]] 00:34:13.022 22:17:32 blockdev_crypto_sw -- bdev/blockdev.sh@690 -- # [[ crypto_sw == crypto_* ]] 00:34:13.023 22:17:32 blockdev_crypto_sw -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:34:13.023 22:17:32 blockdev_crypto_sw -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:34:13.023 22:17:32 blockdev_crypto_sw -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=1593057 00:34:13.023 22:17:32 blockdev_crypto_sw -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:34:13.023 22:17:32 blockdev_crypto_sw -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:34:13.023 22:17:32 blockdev_crypto_sw -- bdev/blockdev.sh@49 -- # waitforlisten 1593057 00:34:13.023 22:17:32 blockdev_crypto_sw -- common/autotest_common.sh@829 -- # '[' -z 1593057 ']' 00:34:13.023 22:17:32 blockdev_crypto_sw -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:13.023 22:17:32 blockdev_crypto_sw -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:13.023 22:17:32 blockdev_crypto_sw -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:13.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:13.023 22:17:32 blockdev_crypto_sw -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:13.023 22:17:32 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:34:13.023 [2024-07-13 22:17:32.344800] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:34:13.023 [2024-07-13 22:17:32.344899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1593057 ]
00:34:13.281 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:13.281 EAL: Requested device 0000:3d:01.0 cannot be used
00:34:13.281 [message pair repeated for devices 0000:3d:01.1 through 0000:3f:02.1]
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.281 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:13.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.281 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:13.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.281 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:13.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.281 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:13.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.281 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:13.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.281 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:13.281 [2024-07-13 22:17:32.503138] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:13.539 [2024-07-13 22:17:32.694696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:13.796 22:17:33 blockdev_crypto_sw -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:13.796 22:17:33 blockdev_crypto_sw -- common/autotest_common.sh@862 -- # return 0 00:34:13.796 22:17:33 blockdev_crypto_sw -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:34:13.796 22:17:33 blockdev_crypto_sw -- bdev/blockdev.sh@711 -- # setup_crypto_sw_conf 00:34:13.796 22:17:33 blockdev_crypto_sw -- bdev/blockdev.sh@193 -- # rpc_cmd 00:34:13.796 22:17:33 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.796 22:17:33 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:34:14.729 Malloc0 00:34:14.729 Malloc1 00:34:14.729 true 00:34:14.729 true 00:34:14.729 true 00:34:14.729 [2024-07-13 22:17:34.086786] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:34:14.729 crypto_ram 00:34:14.729 [2024-07-13 22:17:34.094794] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: 
*NOTICE*: Found key "test_dek_sw2" 00:34:14.729 crypto_ram2 00:34:14.729 [2024-07-13 22:17:34.102834] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:34:14.729 crypto_ram3 00:34:14.729 [ 00:34:14.729 { 00:34:14.729 "name": "Malloc1", 00:34:14.729 "aliases": [ 00:34:14.729 "9fc81c13-ae53-4360-9481-0094726bfd43" 00:34:14.729 ], 00:34:14.729 "product_name": "Malloc disk", 00:34:14.729 "block_size": 4096, 00:34:14.729 "num_blocks": 4096, 00:34:14.729 "uuid": "9fc81c13-ae53-4360-9481-0094726bfd43", 00:34:14.729 "assigned_rate_limits": { 00:34:14.729 "rw_ios_per_sec": 0, 00:34:14.729 "rw_mbytes_per_sec": 0, 00:34:14.729 "r_mbytes_per_sec": 0, 00:34:14.729 "w_mbytes_per_sec": 0 00:34:14.729 }, 00:34:14.729 "claimed": true, 00:34:14.729 "claim_type": "exclusive_write", 00:34:14.729 "zoned": false, 00:34:14.729 "supported_io_types": { 00:34:14.729 "read": true, 00:34:14.729 "write": true, 00:34:14.729 "unmap": true, 00:34:14.729 "flush": true, 00:34:14.729 "reset": true, 00:34:14.729 "nvme_admin": false, 00:34:14.729 "nvme_io": false, 00:34:14.729 "nvme_io_md": false, 00:34:14.729 "write_zeroes": true, 00:34:14.729 "zcopy": true, 00:34:14.729 "get_zone_info": false, 00:34:14.729 "zone_management": false, 00:34:14.729 "zone_append": false, 00:34:14.729 "compare": false, 00:34:14.729 "compare_and_write": false, 00:34:14.729 "abort": true, 00:34:14.729 "seek_hole": false, 00:34:14.988 "seek_data": false, 00:34:14.988 "copy": true, 00:34:14.988 "nvme_iov_md": false 00:34:14.988 }, 00:34:14.988 "memory_domains": [ 00:34:14.988 { 00:34:14.988 "dma_device_id": "system", 00:34:14.988 "dma_device_type": 1 00:34:14.988 }, 00:34:14.988 { 00:34:14.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:14.988 "dma_device_type": 2 00:34:14.988 } 00:34:14.988 ], 00:34:14.988 "driver_specific": {} 00:34:14.988 } 00:34:14.988 ] 00:34:14.988 22:17:34 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.988 22:17:34 
blockdev_crypto_sw -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:34:14.988 22:17:34 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.988 22:17:34 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:34:14.988 22:17:34 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.988 22:17:34 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # cat 00:34:14.988 22:17:34 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:34:14.988 22:17:34 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.988 22:17:34 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:34:14.988 22:17:34 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.988 22:17:34 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:34:14.988 22:17:34 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.988 22:17:34 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:34:14.988 22:17:34 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.988 22:17:34 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:34:14.988 22:17:34 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.988 22:17:34 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:34:14.988 22:17:34 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.988 22:17:34 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:34:14.988 22:17:34 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:34:14.988 22:17:34 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:34:14.988 22:17:34 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.988 22:17:34 blockdev_crypto_sw -- 
common/autotest_common.sh@10 -- # set +x 00:34:14.988 22:17:34 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.988 22:17:34 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:34:14.988 22:17:34 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # jq -r .name 00:34:14.988 22:17:34 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "8d425104-43e8-511c-9139-e047ed9d63aa"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "8d425104-43e8-511c-9139-e047ed9d63aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "730d6c5b-3d91-55d2-b576-c40e4cce8fb6"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "730d6c5b-3d91-55d2-b576-c40e4cce8fb6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:34:14.988 22:17:34 blockdev_crypto_sw -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:34:14.988 22:17:34 blockdev_crypto_sw -- bdev/blockdev.sh@752 -- # hello_world_bdev=crypto_ram 00:34:14.988 22:17:34 blockdev_crypto_sw -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:34:14.988 22:17:34 blockdev_crypto_sw -- bdev/blockdev.sh@754 -- # killprocess 1593057 00:34:14.988 22:17:34 blockdev_crypto_sw -- common/autotest_common.sh@948 -- # '[' -z 1593057 ']' 00:34:14.988 22:17:34 blockdev_crypto_sw -- common/autotest_common.sh@952 -- # kill -0 1593057 00:34:14.988 22:17:34 blockdev_crypto_sw -- common/autotest_common.sh@953 -- # uname 00:34:14.988 22:17:34 blockdev_crypto_sw -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:14.988 22:17:34 blockdev_crypto_sw -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1593057 00:34:14.988 22:17:34 blockdev_crypto_sw -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:34:14.988 22:17:34 blockdev_crypto_sw -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:14.988 22:17:34 blockdev_crypto_sw -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1593057' 00:34:14.988 killing process with pid 1593057 00:34:14.988 22:17:34 blockdev_crypto_sw -- common/autotest_common.sh@967 -- # kill 1593057 00:34:14.988 22:17:34 blockdev_crypto_sw -- common/autotest_common.sh@972 -- # wait 1593057 00:34:17.519 22:17:36 blockdev_crypto_sw -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:17.519 22:17:36 blockdev_crypto_sw -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:34:17.519 22:17:36 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:34:17.519 22:17:36 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:17.519 22:17:36 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:34:17.519 ************************************ 00:34:17.519 START TEST bdev_hello_world 00:34:17.519 ************************************ 00:34:17.519 22:17:36 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:34:17.519 [2024-07-13 22:17:36.850050] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
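The `killprocess` teardown traced above probes the pid with `kill -0`, resolves its command name (`reactor_0` here), refuses to proceed if the name resolves to a bare `sudo` wrapper, and only then signals the process. A minimal Python sketch of the same idiom (a hypothetical helper, not autotest_common.sh itself):

```python
import os
import signal
import subprocess

def killprocess(pid: int) -> bool:
    # Probe whether the pid is alive (signal 0 delivers nothing).
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False
    # Resolve the command name, mirroring `ps --no-headers -o comm=`.
    name = subprocess.run(
        ["ps", "--no-headers", "-o", "comm=", "-p", str(pid)],
        capture_output=True, text=True,
    ).stdout.strip()
    if name == "sudo":
        return False  # never signal a sudo wrapper directly
    os.kill(pid, signal.SIGTERM)
    return True
```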
00:34:17.519 [2024-07-13 22:17:36.850148] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1593711 ] 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3d:02.3 cannot be used 
00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3f:02.1 cannot be used 00:34:17.777 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:17.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.777 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:17.777 [2024-07-13 22:17:37.013577] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:18.035 [2024-07-13 22:17:37.215882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:18.294 [2024-07-13 22:17:37.628891] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:34:18.294 [2024-07-13 22:17:37.628971] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:34:18.294 [2024-07-13 22:17:37.628985] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:18.294 [2024-07-13 22:17:37.636906] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:34:18.294 [2024-07-13 22:17:37.636941] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:34:18.294 [2024-07-13 22:17:37.636952] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:18.294 [2024-07-13 22:17:37.644924] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:34:18.294 [2024-07-13 22:17:37.644951] 
bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:34:18.294 [2024-07-13 22:17:37.644962] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:18.552 [2024-07-13 22:17:37.715218] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:34:18.552 [2024-07-13 22:17:37.715244] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:34:18.552 [2024-07-13 22:17:37.715266] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:34:18.552 [2024-07-13 22:17:37.716857] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:34:18.552 [2024-07-13 22:17:37.716944] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:34:18.552 [2024-07-13 22:17:37.716961] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:34:18.552 [2024-07-13 22:17:37.716990] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
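The QAT rejection messages above come in pairs, one pair per PCI function, which makes the interesting information (which BDFs EAL refused) hard to see. When triaging logs like this it can help to pull those addresses out programmatically; a minimal sketch (hypothetical log-scraping helper, not part of SPDK or DPDK):

```python
import re

# Matches EAL rejection lines such as:
#   "EAL: Requested device 0000:3d:01.0 cannot be used"
BDF_RE = re.compile(r"EAL: Requested device (\S+) cannot be used")

def rejected_devices(log_lines):
    """Return the PCI BDFs that EAL reported as unusable, in log order."""
    return [m.group(1) for line in log_lines for m in BDF_RE.finditer(line)]

sample = [
    "qat_pci_device_allocate(): Reached maximum number of QAT devices",
    "EAL: Requested device 0000:3d:01.0 cannot be used",
    "EAL: Requested device 0000:3f:02.7 cannot be used",
]
print(rejected_devices(sample))  # ['0000:3d:01.0', '0000:3f:02.7']
```

Feeding the full spam block through this yields the 32 functions 0000:3d:01.0 through 0000:3f:02.7, confirming the rejections cover every virtual function rather than a specific device.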
00:34:18.552
00:34:18.552 [2024-07-13 22:17:37.717007] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app
00:34:19.928
00:34:19.928 real 0m2.246s
00:34:19.928 user 0m1.918s
00:34:19.928 sys 0m0.308s
00:34:19.928 22:17:39 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable
00:34:19.928 22:17:39 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:34:19.928 ************************************
00:34:19.928 END TEST bdev_hello_world
00:34:19.928 ************************************
00:34:19.928 22:17:39 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0
00:34:19.928 22:17:39 blockdev_crypto_sw -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds ''
00:34:19.928 22:17:39 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:34:19.928 22:17:39 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable
00:34:19.928 22:17:39 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x
00:34:19.928 ************************************
00:34:19.928 START TEST bdev_bounds
00:34:19.928 ************************************
00:34:19.928 22:17:39 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds ''
00:34:19.928 22:17:39 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=1594197
00:34:19.928 22:17:39 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:34:19.928 22:17:39 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 1594197'
00:34:19.928 Process bdevio pid: 1594197
00:34:19.928 22:17:39 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 1594197
00:34:19.928 22:17:39 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 1594197 ']'
00:34:19.928 22:17:39 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:19.928 22:17:39 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100
00:34:19.928 22:17:39 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:19.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:19.928 22:17:39 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json ''
00:34:19.928 22:17:39 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable
00:34:19.928 22:17:39 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:34:19.928 [2024-07-13 22:17:39.177415] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:34:19.928 [2024-07-13 22:17:39.177509] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1594197 ]
00:34:19.928 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:19.928 EAL: Requested device 0000:3d:01.0 cannot be used
00:34:19.928 (previous two messages repeated for each remaining QAT function, 0000:3d:01.1 through 0000:3f:02.7, 31 more devices)
00:34:20.187 [2024-07-13 22:17:39.336893] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:34:20.187 [2024-07-13 22:17:39.541399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:34:20.187 [2024-07-13 22:17:39.541467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:34:20.187 [2024-07-13 22:17:39.541471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:34:20.754 [2024-07-13 22:17:39.990805] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw"
00:34:20.754 [2024-07-13 22:17:39.990868] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:34:20.754 [2024-07-13 22:17:39.990883] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:34:20.754 [2024-07-13 22:17:39.998822] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2"
00:34:20.754 [2024-07-13 22:17:39.998851] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:34:20.754 [2024-07-13 22:17:39.998862] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:34:20.754 [2024-07-13 22:17:40.006851] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3"
00:34:20.754 [2024-07-13 22:17:40.006882] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2
00:34:20.754 [2024-07-13 22:17:40.006894] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:34:20.754 22:17:40 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:34:20.754 22:17:40 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@862 -- # return 0
00:34:20.754 22:17:40 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests
00:34:21.012 I/O targets:
00:34:21.012 crypto_ram: 32768 blocks of 512 bytes (16 MiB)
00:34:21.013 crypto_ram3: 4096 blocks of 4096 bytes (16 MiB)
00:34:21.013
00:34:21.013
00:34:21.013 CUnit - A unit testing framework for C - Version 2.1-3
00:34:21.013 http://cunit.sourceforge.net/
00:34:21.013
00:34:21.013 Suite: bdevio tests on: crypto_ram3
00:34:21.013 Test: blockdev write read block ...passed
00:34:21.013 Test: blockdev write zeroes read block ...passed
00:34:21.013 Test: blockdev write zeroes read no split ...passed
00:34:21.013 Test: blockdev write zeroes read split ...passed
00:34:21.013 Test: blockdev write zeroes read split partial ...passed
00:34:21.013 Test: blockdev reset ...passed
00:34:21.013 Test: blockdev write read 8 blocks ...passed
00:34:21.013 Test: blockdev write read size > 128k ...passed
00:34:21.013 Test: blockdev write read invalid size ...passed
00:34:21.013 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:34:21.013 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:34:21.013 Test: blockdev write read max offset ...passed
00:34:21.013 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:34:21.013 Test: blockdev writev readv 8 blocks ...passed
00:34:21.013 Test: blockdev writev readv 30 x 1block ...passed
00:34:21.013 Test: blockdev writev readv block ...passed
00:34:21.013 Test: blockdev writev readv size > 128k ...passed
00:34:21.013 Test: blockdev writev readv size > 128k in two iovs ...passed
00:34:21.013 Test: blockdev comparev and writev ...passed
00:34:21.013 Test: blockdev nvme passthru rw ...passed
00:34:21.013 Test: blockdev nvme passthru vendor specific ...passed
00:34:21.013 Test: blockdev nvme admin passthru ...passed
00:34:21.013 Test: blockdev copy ...passed
00:34:21.013 Suite: bdevio tests on: crypto_ram
00:34:21.013 Test: blockdev write read block ...passed
00:34:21.013 Test: blockdev write zeroes read block ...passed
00:34:21.013 Test: blockdev write zeroes read no split ...passed
00:34:21.013 Test: blockdev write zeroes read split ...passed
00:34:21.013 Test: blockdev write zeroes read split partial ...passed
00:34:21.013 Test: blockdev reset ...passed
00:34:21.013 Test: blockdev write read 8 blocks ...passed
00:34:21.013 Test: blockdev write read size > 128k ...passed
00:34:21.013 Test: blockdev write read invalid size ...passed
00:34:21.013 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:34:21.013 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:34:21.013 Test: blockdev write read max offset ...passed
00:34:21.013 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:34:21.013 Test: blockdev writev readv 8 blocks ...passed
00:34:21.013 Test: blockdev writev readv 30 x 1block ...passed
00:34:21.013 Test: blockdev writev readv block ...passed
00:34:21.013 Test: blockdev writev readv size > 128k ...passed
00:34:21.013 Test: blockdev writev readv size > 128k in two iovs ...passed
00:34:21.013 Test: blockdev comparev and writev ...passed
00:34:21.013 Test: blockdev nvme passthru rw ...passed
00:34:21.013 Test: blockdev nvme passthru vendor specific ...passed
00:34:21.013 Test: blockdev nvme admin passthru ...passed
00:34:21.013 Test: blockdev copy ...passed
00:34:21.013
00:34:21.013 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:34:21.013               suites      2      2    n/a      0        0
00:34:21.013                tests     46     46     46      0        0
00:34:21.013              asserts    260    260    260      0      n/a
00:34:21.013
00:34:21.013 Elapsed time =    0.424 seconds
00:34:21.013 0
00:34:21.013 22:17:40 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 1594197
00:34:21.013 22:17:40 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 1594197 ']'
00:34:21.013 22:17:40 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 1594197
00:34:21.013 22:17:40 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@953 -- # uname
00:34:21.013 22:17:40 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:34:21.013 22:17:40 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1594197
00:34:21.271 22:17:40 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:34:21.271 22:17:40 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:34:21.271 22:17:40 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1594197'
00:34:21.271 killing process with pid 1594197
00:34:21.271 22:17:40 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@967 -- # kill 1594197
00:34:21.271 22:17:40 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@972 -- # wait 1594197
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT
00:34:22.649
00:34:22.649 real 0m2.630s
00:34:22.649 user 0m6.118s
00:34:22.649 sys 0m0.444s
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:34:22.649 ************************************
00:34:22.649 END TEST bdev_bounds
00:34:22.649 ************************************
00:34:22.649 22:17:41 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0
00:34:22.649 22:17:41 blockdev_crypto_sw -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' ''
00:34:22.649 22:17:41 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:34:22.649 22:17:41 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable
00:34:22.649 22:17:41 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x
00:34:22.649 ************************************
00:34:22.649 START TEST bdev_nbd
00:34:22.649 ************************************
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' ''
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]]
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('crypto_ram' 'crypto_ram3')
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=2
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]]
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=2
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('crypto_ram' 'crypto_ram3')
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=1594656
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json ''
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 1594656 /var/tmp/spdk-nbd.sock
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 1594656 ']'
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:34:22.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable
00:34:22.649 22:17:41 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:34:22.649 [2024-07-13 22:17:41.897420] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
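Both bdevio and bdev_svc are launched and then polled with waitforlisten (note the max_retries=100 in the trace) until their RPC UNIX-domain socket accepts connections. That polling pattern can be re-implemented as a small sketch; this is a hypothetical Python stand-in for illustration, not the actual autotest_common.sh helper:

```python
import os
import socket
import tempfile
import threading
import time

def wait_for_listen(sock_path, max_retries=100, delay=0.05):
    """Poll a UNIX-domain socket until a server accepts connections on it,
    mirroring the waitforlisten / max_retries=100 pattern in the log."""
    for _ in range(max_retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)  # succeeds only once the server is listening
            return True
        except OSError:
            time.sleep(delay)     # socket missing or refusing: retry
        finally:
            s.close()
    return False

# Demo: start a listener shortly after polling begins.
path = os.path.join(tempfile.mkdtemp(), "demo.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)

def serve():
    time.sleep(0.2)
    server.bind(path)
    server.listen(1)

threading.Thread(target=serve, daemon=True).start()
print(wait_for_listen(path))  # True once the listener is up
```

The key design point, visible in the log as well, is that the waiter treats "socket file missing" and "connection refused" identically and just retries with a bounded budget, so a slow daemon start is tolerated while a crashed one eventually fails the test.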
00:34:22.649 [2024-07-13 22:17:41.897511] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:22.649 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:22.649 EAL: Requested device 0000:3d:01.0 cannot be used
00:34:22.649 (previous two messages repeated for each remaining QAT function, 0000:3d:01.1 through 0000:3f:02.7, 31 more devices)
00:34:22.908 [2024-07-13 22:17:42.061711] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:22.908 [2024-07-13 22:17:42.261555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:34:23.476 [2024-07-13 22:17:42.701603] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw"
00:34:23.476 [2024-07-13 22:17:42.701679] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:34:23.476 [2024-07-13 22:17:42.701694] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:34:23.476 [2024-07-13 22:17:42.709627] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2"
00:34:23.476 [2024-07-13 22:17:42.709661] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:34:23.476 [2024-07-13 22:17:42.709672] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:34:23.476 [2024-07-13 22:17:42.717656] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3"
00:34:23.476 [2024-07-13 22:17:42.717683] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2
00:34:23.476 [2024-07-13 22:17:42.717694] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:34:23.476 22:17:42 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:34:23.476 22:17:42 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@862 -- # return 0
00:34:23.476 22:17:42 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3'
00:34:23.476 22:17:42 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:34:23.476 22:17:42 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram3')
00:34:23.476 22:17:42 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:34:23.476 22:17:42 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3'
00:34:23.476 22:17:42 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:34:23.476 22:17:42 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram3')
00:34:23.476 22:17:42 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:34:23.476 22:17:42 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:34:23.476 22:17:42 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:34:23.476 22:17:42 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:34:23.476 22:17:42 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 ))
00:34:23.476 22:17:42 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram
00:34:23.734 22:17:42 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:34:23.734 22:17:42 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:34:23.734 22:17:42 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:34:23.734 22:17:42 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:34:23.734 22:17:42 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local i
00:34:23.734 22:17:42 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:34:23.734 22:17:42 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:34:23.734 22:17:42 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:34:23.734 22:17:42 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # break
00:34:23.734 22:17:42 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:34:23.734 22:17:42 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:34:23.734 22:17:42 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:34:23.734 1+0 records in
00:34:23.734 1+0 records out
00:34:23.734 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251535 s, 16.3 MB/s
00:34:23.734 22:17:43 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:34:23.734 22:17:43 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096
00:34:23.734 22:17:43 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:34:23.734 22:17:43 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:34:23.734 22:17:43 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # return 0
00:34:23.735 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:34:23.735 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 ))
00:34:23.735 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3
00:34:23.995 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:34:23.995 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:34:23.995 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:34:23.995 22:17:43 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:34:23.995 22:17:43 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local i
00:34:23.995 22:17:43 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:34:23.995 22:17:43 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:34:23.995 22:17:43 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:34:23.995 22:17:43 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # break
00:34:23.995 22:17:43 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:34:23.995 22:17:43 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:34:23.995 22:17:43 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:34:23.995 1+0 records in
00:34:23.995 1+0 records out
00:34:23.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291115 s, 14.1 MB/s
00:34:23.995 22:17:43 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # stat
-c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:23.995 22:17:43 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:34:23.995 22:17:43 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:23.995 22:17:43 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:34:23.995 22:17:43 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:34:23.995 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:23.995 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:34:23.995 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:24.254 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:34:24.254 { 00:34:24.254 "nbd_device": "/dev/nbd0", 00:34:24.254 "bdev_name": "crypto_ram" 00:34:24.254 }, 00:34:24.254 { 00:34:24.254 "nbd_device": "/dev/nbd1", 00:34:24.254 "bdev_name": "crypto_ram3" 00:34:24.254 } 00:34:24.254 ]' 00:34:24.254 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:34:24.254 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:34:24.254 { 00:34:24.254 "nbd_device": "/dev/nbd0", 00:34:24.254 "bdev_name": "crypto_ram" 00:34:24.254 }, 00:34:24.254 { 00:34:24.254 "nbd_device": "/dev/nbd1", 00:34:24.254 "bdev_name": "crypto_ram3" 00:34:24.254 } 00:34:24.254 ]' 00:34:24.254 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:34:24.254 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:34:24.254 22:17:43 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:24.254 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:24.254 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:24.254 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:34:24.254 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:24.254 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:24.254 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:24.254 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:24.254 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:24.254 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:24.254 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:24.254 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:24.254 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:24.254 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:24.254 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:24.254 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:34:24.512 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:24.512 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:24.512 22:17:43 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:24.512 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:24.512 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:24.512 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:24.512 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:24.512 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:24.512 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:24.512 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:24.512 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:24.769 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:34:24.769 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:34:24.769 22:17:43 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:24.769 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:34:24.769 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:34:24.769 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:24.769 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:34:24.769 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:34:24.769 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:34:24.769 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:34:24.769 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 
00:34:24.769 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:34:24.769 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:34:24.769 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:24.769 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:34:24.769 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:34:24.769 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:24.769 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:34:24.769 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:34:24.769 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:24.769 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:34:24.769 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:24.769 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:24.769 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:24.769 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:34:24.769 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:24.769 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:24.769 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 
00:34:25.026 /dev/nbd0 00:34:25.026 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:25.027 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:25.027 22:17:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:34:25.027 22:17:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:34:25.027 22:17:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:34:25.027 22:17:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:34:25.027 22:17:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:34:25.027 22:17:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:34:25.027 22:17:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:34:25.027 22:17:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:34:25.027 22:17:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:25.027 1+0 records in 00:34:25.027 1+0 records out 00:34:25.027 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274034 s, 14.9 MB/s 00:34:25.027 22:17:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:25.027 22:17:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:34:25.027 22:17:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:25.027 22:17:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:34:25.027 22:17:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # 
return 0 00:34:25.027 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:25.027 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:25.027 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd1 00:34:25.027 /dev/nbd1 00:34:25.027 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:25.285 1+0 records in 00:34:25.285 1+0 records out 00:34:25.285 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314465 s, 13.0 MB/s 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- 
common/autotest_common.sh@884 -- # size=4096 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:34:25.285 { 00:34:25.285 "nbd_device": "/dev/nbd0", 00:34:25.285 "bdev_name": "crypto_ram" 00:34:25.285 }, 00:34:25.285 { 00:34:25.285 "nbd_device": "/dev/nbd1", 00:34:25.285 "bdev_name": "crypto_ram3" 00:34:25.285 } 00:34:25.285 ]' 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:34:25.285 { 00:34:25.285 "nbd_device": "/dev/nbd0", 00:34:25.285 "bdev_name": "crypto_ram" 00:34:25.285 }, 00:34:25.285 { 00:34:25.285 "nbd_device": "/dev/nbd1", 00:34:25.285 "bdev_name": "crypto_ram3" 00:34:25.285 } 00:34:25.285 ]' 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:34:25.285 /dev/nbd1' 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo 
'/dev/nbd0 00:34:25.285 /dev/nbd1' 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:34:25.285 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:34:25.544 256+0 records in 00:34:25.544 256+0 records out 00:34:25.544 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114531 s, 91.6 MB/s 00:34:25.544 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:25.544 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:34:25.544 256+0 records in 00:34:25.544 256+0 records out 00:34:25.544 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204865 s, 51.2 MB/s 00:34:25.544 22:17:44 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:25.544 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:34:25.544 256+0 records in 00:34:25.544 256+0 records out 00:34:25.544 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274771 s, 38.2 MB/s 00:34:25.544 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:34:25.544 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:25.544 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:34:25.544 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:34:25.544 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:34:25.544 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:34:25.544 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:34:25.544 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:25.544 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:34:25.544 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:25.544 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:34:25.544 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:34:25.544 22:17:44 blockdev_crypto_sw.bdev_nbd -- 
bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:34:25.544 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:25.544 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:25.544 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:25.544 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:34:25.544 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:25.544 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:25.803 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:25.803 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:25.803 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:25.803 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:25.803 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:25.803 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:25.803 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:25.803 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:25.803 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:25.803 22:17:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:34:25.803 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:25.803 22:17:45 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:25.803 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:25.803 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:25.803 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:25.803 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:25.803 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:25.803 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:25.803 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:25.803 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:25.803 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:26.124 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:34:26.125 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:34:26.125 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:26.125 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:34:26.125 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:34:26.125 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:26.125 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:34:26.125 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:34:26.125 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:34:26.125 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 
00:34:26.125 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:34:26.125 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:34:26.125 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:34:26.125 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:26.125 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:26.125 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:34:26.125 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:34:26.125 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:34:26.413 malloc_lvol_verify 00:34:26.413 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:34:26.413 d13701be-1422-4704-84ba-747b66caef43 00:34:26.413 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:34:26.671 8eb2b1fa-07e7-4228-b881-97709c691dfa 00:34:26.671 22:17:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:34:26.671 /dev/nbd0 00:34:26.671 22:17:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:34:26.671 mke2fs 1.46.5 (30-Dec-2021) 00:34:26.930 Discarding device blocks: 0/4096 done 00:34:26.930 Creating filesystem with 4096 1k 
blocks and 1024 inodes 00:34:26.930 00:34:26.930 Allocating group tables: 0/1 done 00:34:26.930 Writing inode tables: 0/1 done 00:34:26.930 Creating journal (1024 blocks): done 00:34:26.930 Writing superblocks and filesystem accounting information: 0/1 done 00:34:26.930 00:34:26.930 22:17:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:34:26.930 22:17:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:34:26.930 22:17:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:26.930 22:17:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:26.930 22:17:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:26.930 22:17:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:34:26.930 22:17:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:26.930 22:17:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:26.930 22:17:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:26.930 22:17:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:26.930 22:17:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:26.930 22:17:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:26.930 22:17:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:26.930 22:17:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:26.930 22:17:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:26.930 22:17:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:26.930 22:17:46 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:34:26.930 22:17:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:34:26.930 22:17:46 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 1594656 00:34:26.930 22:17:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 1594656 ']' 00:34:26.930 22:17:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 1594656 00:34:26.930 22:17:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:34:26.930 22:17:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:26.930 22:17:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1594656 00:34:26.930 22:17:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:26.930 22:17:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:26.930 22:17:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1594656' 00:34:26.930 killing process with pid 1594656 00:34:26.930 22:17:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@967 -- # kill 1594656 00:34:26.930 22:17:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@972 -- # wait 1594656 00:34:28.306 22:17:47 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:34:28.306 00:34:28.306 real 0m5.861s 00:34:28.306 user 0m7.389s 00:34:28.306 sys 0m2.009s 00:34:28.306 22:17:47 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:28.306 22:17:47 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:34:28.306 ************************************ 00:34:28.306 END TEST bdev_nbd 00:34:28.306 ************************************ 00:34:28.565 22:17:47 blockdev_crypto_sw -- 
common/autotest_common.sh@1142 -- # return 0 00:34:28.565 22:17:47 blockdev_crypto_sw -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:34:28.565 22:17:47 blockdev_crypto_sw -- bdev/blockdev.sh@764 -- # '[' crypto_sw = nvme ']' 00:34:28.565 22:17:47 blockdev_crypto_sw -- bdev/blockdev.sh@764 -- # '[' crypto_sw = gpt ']' 00:34:28.565 22:17:47 blockdev_crypto_sw -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:34:28.565 22:17:47 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:28.565 22:17:47 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:28.565 22:17:47 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:34:28.565 ************************************ 00:34:28.565 START TEST bdev_fio 00:34:28.565 ************************************ 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:34:28.565 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local 
config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- 
bdev/blockdev.sh@342 -- # echo '[job_crypto_ram]' 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram3]' 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram3 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:34:28.565 ************************************ 00:34:28.565 START TEST bdev_fio_rw_verify 00:34:28.565 ************************************ 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:34:28.565 22:17:47 
blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:28.565 22:17:47 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:34:29.165 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:34:29.165 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:34:29.165 fio-3.35 00:34:29.165 Starting 2 threads 00:34:29.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:29.165 EAL: Requested device 0000:3d:01.0 cannot be used
00:34:41.354 00:34:41.354 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=1596081: Sat Jul 13 22:17:59 2024 00:34:41.354 read: IOPS=30.6k, BW=120MiB/s (125MB/s)(1197MiB/10001msec) 00:34:41.354 slat (usec): min=9, max=107, avg=14.72, stdev= 3.12 00:34:41.354 clat (usec): min=5, max=498, avg=105.14, stdev=42.94 00:34:41.354 lat (usec): min=18, max=549, avg=119.86, stdev=44.15 00:34:41.354 clat percentiles (usec): 00:34:41.354 | 50.000th=[ 102], 99.000th=[ 206], 99.900th=[ 231], 99.990th=[ 265], 00:34:41.354 | 99.999th=[ 416] 00:34:41.354 write: IOPS=36.8k, BW=144MiB/s (151MB/s)(1364MiB/9492msec); 0 zone resets 00:34:41.354 slat (usec): min=9, max=195, avg=24.22, stdev= 4.17 00:34:41.354 clat (usec): min=17, max=748, avg=140.20, stdev=65.17 00:34:41.354 lat (usec): min=35, max=845, avg=164.42, stdev=66.58 00:34:41.354 clat percentiles (usec): 00:34:41.354 | 50.000th=[ 137], 99.000th=[ 285], 99.900th=[ 314], 99.990th=[ 506], 00:34:41.354 | 99.999th=[ 717] 00:34:41.354 bw ( KiB/s): min=131720, max=147008, per=94.84%, avg=139583.58, stdev=2281.73, samples=38 00:34:41.354 iops : min=32930, max=36752, avg=34895.89, stdev=570.43, samples=38 00:34:41.354 lat (usec) : 10=0.01%, 20=0.01%, 50=8.65%, 100=30.28%, 250=57.69% 00:34:41.354 lat (usec) : 500=3.36%, 750=0.01% 00:34:41.354 cpu : usr=99.28%, sys=0.36%, ctx=35, majf=0, minf=26011 00:34:41.354 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:41.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.354 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.354 issued rwts: total=306416,349247,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.354 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:41.354 00:34:41.354 Run status group 0 (all jobs): 00:34:41.354 READ: bw=120MiB/s (125MB/s), 120MiB/s-120MiB/s (125MB/s-125MB/s), io=1197MiB (1255MB), run=10001-10001msec 00:34:41.354 WRITE: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s 
(151MB/s-151MB/s), io=1364MiB (1431MB), run=9492-9492msec 00:34:41.354 ----------------------------------------------------- 00:34:41.354 Suppressions used: 00:34:41.354 count bytes template 00:34:41.354 2 23 /usr/src/fio/parse.c 00:34:41.354 1475 141600 /usr/src/fio/iolog.c 00:34:41.354 1 8 libtcmalloc_minimal.so 00:34:41.354 1 904 libcrypto.so 00:34:41.354 ----------------------------------------------------- 00:34:41.354 00:34:41.354 00:34:41.354 real 0m12.713s 00:34:41.354 user 0m30.244s 00:34:41.354 sys 0m0.659s 00:34:41.354 22:18:00 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:41.354 22:18:00 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:34:41.354 ************************************ 00:34:41.354 END TEST bdev_fio_rw_verify 00:34:41.354 ************************************ 00:34:41.354 22:18:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:34:41.354 22:18:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:34:41.354 22:18:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:34:41.354 22:18:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:34:41.354 22:18:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:34:41.354 22:18:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:34:41.354 22:18:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:34:41.354 22:18:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:34:41.354 22:18:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local 
fio_dir=/usr/src/fio 00:34:41.354 22:18:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:34:41.354 22:18:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:34:41.354 22:18:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:34:41.354 22:18:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:34:41.354 22:18:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:34:41.354 22:18:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:34:41.354 22:18:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:34:41.354 22:18:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:34:41.355 22:18:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "8d425104-43e8-511c-9139-e047ed9d63aa"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "8d425104-43e8-511c-9139-e047ed9d63aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": 
"system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "730d6c5b-3d91-55d2-b576-c40e4cce8fb6"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "730d6c5b-3d91-55d2-b576-c40e4cce8fb6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:34:41.355 22:18:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:34:41.355 22:18:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n crypto_ram 00:34:41.355 crypto_ram3 ]] 00:34:41.355 22:18:00 
blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:34:41.355 22:18:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "8d425104-43e8-511c-9139-e047ed9d63aa"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "8d425104-43e8-511c-9139-e047ed9d63aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "730d6c5b-3d91-55d2-b576-c40e4cce8fb6"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "730d6c5b-3d91-55d2-b576-c40e4cce8fb6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": 
false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:34:41.355 22:18:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:34:41.355 22:18:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram]' 00:34:41.355 22:18:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram 00:34:41.355 22:18:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:34:41.355 22:18:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram3]' 00:34:41.355 22:18:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram3 00:34:41.355 22:18:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 
--aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:34:41.355 22:18:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:34:41.355 22:18:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:41.355 22:18:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:34:41.612 ************************************ 00:34:41.612 START TEST bdev_fio_trim 00:34:41.612 ************************************ 00:34:41.612 22:18:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:34:41.612 22:18:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:34:41.612 22:18:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:41.612 22:18:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:41.612 22:18:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:41.612 22:18:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.612 22:18:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:34:41.612 22:18:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:41.612 22:18:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:41.612 22:18:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.612 22:18:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:41.612 22:18:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:34:41.612 22:18:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:41.612 22:18:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:41.612 22:18:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # break 00:34:41.612 22:18:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:41.612 22:18:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:34:41.869 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=spdk_bdev, iodepth=8 00:34:41.869 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:34:41.869 fio-3.35 00:34:41.869 Starting 2 threads 00:34:42.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:42.126 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:54.316 00:34:54.316 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=1598454: Sat Jul 13 22:18:12 2024 00:34:54.316 write: IOPS=54.4k, BW=212MiB/s (223MB/s)(2124MiB/10001msec); 0 zone resets 00:34:54.316 slat (usec): min=9, max=122, avg=16.35, stdev= 3.48 00:34:54.316 clat (usec): min=24, max=845, avg=119.81, stdev=67.79 00:34:54.316 lat (usec): min=33, max=903, avg=136.15, stdev=70.29 00:34:54.316 clat percentiles (usec): 00:34:54.316 | 50.000th=[ 95], 99.000th=[ 255], 99.900th=[ 277], 99.990th=[ 461], 00:34:54.316 | 99.999th=[ 750] 00:34:54.316 bw ( KiB/s): min=211088, max=220128, per=100.00%, avg=217503.58, stdev=1014.86, samples=38 00:34:54.316 iops : min=52772, max=55032, avg=54375.89, stdev=253.72, samples=38 00:34:54.316 trim: IOPS=54.4k, BW=212MiB/s (223MB/s)(2124MiB/10001msec); 0 zone resets 00:34:54.316 slat (nsec): min=4044, max=88782, avg=7448.37, stdev=2013.85 00:34:54.316 clat (usec): min=30, max=532, avg=79.66, stdev=25.02 00:34:54.316 lat (usec): min=36, max=582, avg=87.10, stdev=25.27 00:34:54.316 clat percentiles (usec): 00:34:54.316 | 50.000th=[ 80], 99.000th=[ 135], 99.900th=[ 149], 99.990th=[ 208], 00:34:54.316 | 99.999th=[ 359] 00:34:54.316 bw ( KiB/s): min=211112, max=220128, per=100.00%, 
avg=217505.26, stdev=1012.57, samples=38 00:34:54.316 iops : min=52778, max=55032, avg=54376.32, stdev=253.14, samples=38 00:34:54.316 lat (usec) : 50=15.99%, 100=48.16%, 250=35.12%, 500=0.73%, 750=0.01% 00:34:54.316 lat (usec) : 1000=0.01% 00:34:54.316 cpu : usr=99.64%, sys=0.04%, ctx=39, majf=0, minf=2106 00:34:54.316 IO depths : 1=7.5%, 2=17.4%, 4=60.1%, 8=15.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:54.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.316 complete : 0=0.0%, 4=86.9%, 8=13.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.316 issued rwts: total=0,543785,543785,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.316 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:54.316 00:34:54.316 Run status group 0 (all jobs): 00:34:54.316 WRITE: bw=212MiB/s (223MB/s), 212MiB/s-212MiB/s (223MB/s-223MB/s), io=2124MiB (2227MB), run=10001-10001msec 00:34:54.316 TRIM: bw=212MiB/s (223MB/s), 212MiB/s-212MiB/s (223MB/s-223MB/s), io=2124MiB (2227MB), run=10001-10001msec 00:34:54.316 ----------------------------------------------------- 00:34:54.316 Suppressions used: 00:34:54.316 count bytes template 00:34:54.316 2 23 /usr/src/fio/parse.c 00:34:54.316 1 8 libtcmalloc_minimal.so 00:34:54.316 1 904 libcrypto.so 00:34:54.316 ----------------------------------------------------- 00:34:54.316 00:34:54.316 00:34:54.316 real 0m12.615s 00:34:54.316 user 0m30.652s 00:34:54.316 sys 0m0.647s 00:34:54.316 22:18:13 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:54.316 22:18:13 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:34:54.316 ************************************ 00:34:54.316 END TEST bdev_fio_trim 00:34:54.316 ************************************ 00:34:54.316 22:18:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:34:54.316 22:18:13 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:34:54.316 
22:18:13 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:34:54.316 22:18:13 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:34:54.316 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:34:54.316 22:18:13 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:34:54.316 00:34:54.316 real 0m25.688s 00:34:54.316 user 1m1.087s 00:34:54.316 sys 0m1.496s 00:34:54.316 22:18:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:54.316 22:18:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:34:54.316 ************************************ 00:34:54.316 END TEST bdev_fio 00:34:54.316 ************************************ 00:34:54.316 22:18:13 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:34:54.316 22:18:13 blockdev_crypto_sw -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:54.316 22:18:13 blockdev_crypto_sw -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:34:54.316 22:18:13 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:34:54.316 22:18:13 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:54.316 22:18:13 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:34:54.316 ************************************ 00:34:54.316 START TEST bdev_verify 00:34:54.316 ************************************ 00:34:54.316 22:18:13 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:34:54.316 
[2024-07-13 22:18:13.615046] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:34:54.316 [2024-07-13 22:18:13.615136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1600738 ] 00:34:54.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:54.574 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:54.574 [identical qat_pci_device_allocate()/EAL "cannot be used" message pairs repeated for devices 0000:3d:01.1 through 0000:3f:02.7 omitted] 00:34:54.574 [2024-07-13 22:18:13.779244] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:54.832 [2024-07-13 22:18:13.987261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:54.832 [2024-07-13 22:18:13.987272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:55.089 [2024-07-13 22:18:14.431111] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:34:55.089 [2024-07-13 22:18:14.431178] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:34:55.089 [2024-07-13 22:18:14.431192] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:55.089 [2024-07-13 22:18:14.439129] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:34:55.089 [2024-07-13 22:18:14.439161] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:34:55.089 [2024-07-13 22:18:14.439173] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred
pending base bdev arrival 00:34:55.089 [2024-07-13 22:18:14.447154] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:34:55.089 [2024-07-13 22:18:14.447183] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:34:55.089 [2024-07-13 22:18:14.447194] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:55.347 Running I/O for 5 seconds... 00:35:00.636 00:35:00.636 Latency(us) 00:35:00.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:00.636 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:00.636 Verification LBA range: start 0x0 length 0x800 00:35:00.636 crypto_ram : 5.02 7477.17 29.21 0.00 0.00 17057.13 1336.93 21076.38 00:35:00.636 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:35:00.636 Verification LBA range: start 0x800 length 0x800 00:35:00.637 crypto_ram : 5.02 7477.92 29.21 0.00 0.00 17055.51 1474.56 21076.38 00:35:00.637 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:00.637 Verification LBA range: start 0x0 length 0x800 00:35:00.637 crypto_ram3 : 5.02 3746.59 14.64 0.00 0.00 33997.84 1559.76 24956.11 00:35:00.637 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:35:00.637 Verification LBA range: start 0x800 length 0x800 00:35:00.637 crypto_ram3 : 5.02 3746.97 14.64 0.00 0.00 33991.55 1664.61 24956.11 00:35:00.637 =================================================================================================================== 00:35:00.637 Total : 22448.64 87.69 0.00 0.00 22715.28 1336.93 24956.11 00:35:01.571 00:35:01.571 real 0m7.365s 00:35:01.571 user 0m13.466s 00:35:01.571 sys 0m0.305s 00:35:01.571 22:18:20 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:01.571 22:18:20 blockdev_crypto_sw.bdev_verify -- 
common/autotest_common.sh@10 -- # set +x 00:35:01.571 ************************************ 00:35:01.571 END TEST bdev_verify 00:35:01.571 ************************************ 00:35:01.571 22:18:20 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:35:01.571 22:18:20 blockdev_crypto_sw -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:35:01.571 22:18:20 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:35:01.571 22:18:20 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:01.571 22:18:20 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:35:01.830 ************************************ 00:35:01.830 START TEST bdev_verify_big_io 00:35:01.830 ************************************ 00:35:01.830 22:18:20 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:35:01.830 [2024-07-13 22:18:21.064712] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:35:01.830 [2024-07-13 22:18:21.064801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1602061 ] 00:35:01.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:01.830 EAL: Requested device 0000:3d:01.0 cannot be used 00:35:01.830 [identical qat_pci_device_allocate()/EAL "cannot be used" message pairs repeated for devices 0000:3d:01.1 through 0000:3f:02.7 omitted] 00:35:02.089 [2024-07-13 22:18:21.224023] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:02.089 [2024-07-13 22:18:21.432800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:02.089 [2024-07-13 22:18:21.432810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:02.656 [2024-07-13 22:18:21.881037] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:35:02.656 [2024-07-13 22:18:21.881097] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:35:02.656 [2024-07-13 22:18:21.881111] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:35:02.656 [2024-07-13 22:18:21.889049] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:35:02.656 [2024-07-13 22:18:21.889081] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:35:02.656 [2024-07-13 22:18:21.889093] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:35:02.656 [2024-07-13 22:18:21.897075] vbdev_crypto_rpc.c:
115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:35:02.656 [2024-07-13 22:18:21.897102] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:35:02.656 [2024-07-13 22:18:21.897113] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:35:02.656 Running I/O for 5 seconds... 00:35:07.914 00:35:07.914 Latency(us) 00:35:07.914 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.914 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:35:07.914 Verification LBA range: start 0x0 length 0x80 00:35:07.914 crypto_ram : 5.03 814.95 50.93 0.00 0.00 154565.64 4849.66 213070.64 00:35:07.914 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:35:07.914 Verification LBA range: start 0x80 length 0x80 00:35:07.914 crypto_ram : 5.13 824.00 51.50 0.00 0.00 152992.67 5793.38 211392.92 00:35:07.914 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:35:07.914 Verification LBA range: start 0x0 length 0x80 00:35:07.914 crypto_ram3 : 5.13 423.80 26.49 0.00 0.00 290896.26 4823.45 221459.25 00:35:07.914 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:35:07.914 Verification LBA range: start 0x80 length 0x80 00:35:07.915 crypto_ram3 : 5.14 423.58 26.47 0.00 0.00 291173.79 4456.45 219781.53 00:35:07.915 =================================================================================================================== 00:35:07.915 Total : 2486.33 155.40 0.00 0.00 200909.59 4456.45 221459.25 00:35:09.287 00:35:09.287 real 0m7.552s 00:35:09.287 user 0m13.859s 00:35:09.287 sys 0m0.302s 00:35:09.287 22:18:28 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:09.287 22:18:28 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:35:09.287 ************************************ 
00:35:09.287 END TEST bdev_verify_big_io 00:35:09.287 ************************************ 00:35:09.287 22:18:28 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:35:09.287 22:18:28 blockdev_crypto_sw -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:09.287 22:18:28 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:35:09.287 22:18:28 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:09.287 22:18:28 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:35:09.287 ************************************ 00:35:09.287 START TEST bdev_write_zeroes 00:35:09.287 ************************************ 00:35:09.287 22:18:28 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:09.545 [2024-07-13 22:18:28.702328] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:35:09.545 [2024-07-13 22:18:28.702418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1603180 ] 00:35:09.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:09.545 EAL: Requested device 0000:3d:01.0 cannot be used 00:35:09.545 [identical qat_pci_device_allocate()/EAL "cannot be used" message pairs repeated for devices 0000:3d:01.1 through 0000:3f:02.7 omitted] 00:35:09.545 [2024-07-13 22:18:28.863501] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:09.803 [2024-07-13 22:18:29.068831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:10.366 [2024-07-13 22:18:29.496572] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:35:10.366 [2024-07-13 22:18:29.496632] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:35:10.366 [2024-07-13 22:18:29.496647] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:35:10.366 [2024-07-13 22:18:29.504590] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:35:10.366 [2024-07-13 22:18:29.504621] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:35:10.366 [2024-07-13 22:18:29.504633] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:35:10.366 [2024-07-13 22:18:29.512609] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:35:10.366 [2024-07-13 22:18:29.512633]
bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:35:10.366 [2024-07-13 22:18:29.512644] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:35:10.366 Running I/O for 1 seconds... 00:35:11.298 00:35:11.298 Latency(us) 00:35:11.298 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:11.298 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:35:11.298 crypto_ram : 1.00 38768.59 151.44 0.00 0.00 3294.01 924.06 5164.24 00:35:11.298 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:35:11.298 crypto_ram3 : 1.01 19425.45 75.88 0.00 0.00 6554.62 1205.86 7497.32 00:35:11.298 =================================================================================================================== 00:35:11.298 Total : 58194.04 227.32 0.00 0.00 4385.64 924.06 7497.32 00:35:12.666 00:35:12.666 real 0m3.282s 00:35:12.666 user 0m2.958s 00:35:12.666 sys 0m0.299s 00:35:12.666 22:18:31 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:12.666 22:18:31 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:35:12.666 ************************************ 00:35:12.666 END TEST bdev_write_zeroes 00:35:12.666 ************************************ 00:35:12.666 22:18:31 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:35:12.666 22:18:31 blockdev_crypto_sw -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:12.666 22:18:31 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:35:12.666 22:18:31 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:12.666 22:18:31 
blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:35:12.666 ************************************ START TEST bdev_json_nonenclosed ************************************ 00:35:12.666 22:18:31 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:12.923 [2024-07-13 22:18:32.068274] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:35:12.923 [2024-07-13 22:18:32.068371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1603772 ] 00:35:12.923 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:12.923 EAL: Requested device 0000:3d:01.0 cannot be used 00:35:12.923 [identical qat_pci_device_allocate()/EAL "cannot be used" message pairs repeated for devices 0000:3d:01.1 through 0000:3f:02.7 omitted] 00:35:12.923 [2024-07-13 22:18:32.227013] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:13.180 [2024-07-13 22:18:32.433568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:13.180 [2024-07-13 22:18:32.433568] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:35:13.180 [2024-07-13 22:18:32.433666] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:35:13.180 [2024-07-13 22:18:32.433678] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:35:13.745 00:35:13.745 real 0m0.857s 00:35:13.745 user 0m0.652s 00:35:13.745 sys 0m0.201s 00:35:13.745 22:18:32 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:35:13.745 22:18:32 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:13.745 22:18:32 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:35:13.745 ************************************ 00:35:13.745 END TEST bdev_json_nonenclosed 00:35:13.745 ************************************ 00:35:13.745 22:18:32 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 234 00:35:13.745 22:18:32 blockdev_crypto_sw -- bdev/blockdev.sh@782 -- # true 00:35:13.745 22:18:32 blockdev_crypto_sw -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:13.745 22:18:32 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:35:13.745 22:18:32 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:13.745 22:18:32 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:35:13.745 ************************************ 00:35:13.745 START TEST bdev_json_nonarray 00:35:13.745 ************************************ 00:35:13.745 22:18:32 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:13.745 [2024-07-13 
22:18:33.010023] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:35:13.745 [2024-07-13 22:18:33.010108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1603969 ] 00:35:13.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:13.745 EAL: Requested device 0000:3d:01.0 cannot be used 00:35:13.745 [previous two messages repeated for devices 0000:3d:01.1 through 0000:3f:02.7] 00:35:14.003 [2024-07-13 22:18:33.166190] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:14.003 [2024-07-13 22:18:33.364800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:14.003 [2024-07-13 22:18:33.364887] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:35:14.003 [2024-07-13 22:18:33.364910] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:35:14.003 [2024-07-13 22:18:33.364922] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:35:14.567 00:35:14.567 real 0m0.853s 00:35:14.567 user 0m0.646s 00:35:14.567 sys 0m0.203s 00:35:14.567 22:18:33 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:35:14.567 22:18:33 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:14.567 22:18:33 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:35:14.567 ************************************ 00:35:14.568 END TEST bdev_json_nonarray 00:35:14.568 ************************************ 00:35:14.568 22:18:33 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 234 00:35:14.568 22:18:33 blockdev_crypto_sw -- bdev/blockdev.sh@785 -- # true 00:35:14.568 22:18:33 blockdev_crypto_sw -- bdev/blockdev.sh@787 -- # [[ crypto_sw == bdev ]] 00:35:14.568 22:18:33 blockdev_crypto_sw -- bdev/blockdev.sh@794 -- # [[ crypto_sw == gpt ]] 00:35:14.568 22:18:33 blockdev_crypto_sw -- bdev/blockdev.sh@798 -- # [[ crypto_sw == crypto_sw ]] 00:35:14.568 22:18:33 blockdev_crypto_sw -- bdev/blockdev.sh@799 -- # run_test bdev_crypto_enomem bdev_crypto_enomem 00:35:14.568 22:18:33 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:14.568 22:18:33 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:14.568 22:18:33 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:35:14.568 ************************************ 00:35:14.568 START TEST bdev_crypto_enomem 00:35:14.568 ************************************ 00:35:14.568 22:18:33 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1123 -- # bdev_crypto_enomem 00:35:14.568 22:18:33 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@635 -- # local 
base_dev=base0 00:35:14.568 22:18:33 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@636 -- # local test_dev=crypt0 00:35:14.568 22:18:33 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@637 -- # local err_dev=EE_base0 00:35:14.568 22:18:33 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@638 -- # local qd=32 00:35:14.568 22:18:33 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@641 -- # ERR_PID=1604174 00:35:14.568 22:18:33 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@642 -- # trap 'cleanup; killprocess $ERR_PID; exit 1' SIGINT SIGTERM EXIT 00:35:14.568 22:18:33 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@640 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 32 -o 4096 -w randwrite -t 5 -f '' 00:35:14.568 22:18:33 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@643 -- # waitforlisten 1604174 00:35:14.568 22:18:33 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@829 -- # '[' -z 1604174 ']' 00:35:14.568 22:18:33 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:14.568 22:18:33 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:14.568 22:18:33 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:14.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:14.568 22:18:33 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:14.568 22:18:33 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:35:14.827 [2024-07-13 22:18:33.960898] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:35:14.827 [2024-07-13 22:18:33.961006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1604174 ] 00:35:14.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:14.827 EAL: Requested device 0000:3d:01.0 cannot be used 00:35:14.827 [previous two messages repeated for devices 0000:3d:01.1 through 0000:3f:02.7] 00:35:14.827 [2024-07-13 22:18:34.123635] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.084 [2024-07-13 22:18:34.329050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:15.340 22:18:34 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:15.340 22:18:34 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@862 -- # return 0 00:35:15.340 22:18:34 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@645 -- # rpc_cmd 00:35:15.340 22:18:34 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.340 22:18:34 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:35:15.598 true 00:35:15.598 base0 00:35:15.598 true 00:35:15.598 [2024-07-13 22:18:34.751028] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:35:15.598 crypt0 00:35:15.598 22:18:34 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.598 22:18:34 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@652 -- # waitforbdev crypt0 
00:35:15.598 22:18:34 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@897 -- # local bdev_name=crypt0 00:35:15.598 22:18:34 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:35:15.598 22:18:34 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@899 -- # local i 00:35:15.598 22:18:34 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:35:15.598 22:18:34 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:35:15.598 22:18:34 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:35:15.598 22:18:34 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.598 22:18:34 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:35:15.598 22:18:34 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.598 22:18:34 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b crypt0 -t 2000 00:35:15.598 22:18:34 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.598 22:18:34 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:35:15.598 [ 00:35:15.598 { 00:35:15.598 "name": "crypt0", 00:35:15.598 "aliases": [ 00:35:15.598 "6a0e1740-8c02-55ff-ae97-ed1ddaabb959" 00:35:15.598 ], 00:35:15.598 "product_name": "crypto", 00:35:15.598 "block_size": 512, 00:35:15.598 "num_blocks": 2097152, 00:35:15.598 "uuid": "6a0e1740-8c02-55ff-ae97-ed1ddaabb959", 00:35:15.598 "assigned_rate_limits": { 00:35:15.598 "rw_ios_per_sec": 0, 00:35:15.598 "rw_mbytes_per_sec": 0, 00:35:15.598 "r_mbytes_per_sec": 0, 00:35:15.598 "w_mbytes_per_sec": 0 00:35:15.598 }, 00:35:15.598 "claimed": false, 00:35:15.598 "zoned": false, 00:35:15.598 "supported_io_types": { 
00:35:15.598 "read": true, 00:35:15.598 "write": true, 00:35:15.598 "unmap": false, 00:35:15.598 "flush": false, 00:35:15.598 "reset": true, 00:35:15.598 "nvme_admin": false, 00:35:15.598 "nvme_io": false, 00:35:15.598 "nvme_io_md": false, 00:35:15.598 "write_zeroes": true, 00:35:15.598 "zcopy": false, 00:35:15.598 "get_zone_info": false, 00:35:15.598 "zone_management": false, 00:35:15.598 "zone_append": false, 00:35:15.598 "compare": false, 00:35:15.598 "compare_and_write": false, 00:35:15.598 "abort": false, 00:35:15.598 "seek_hole": false, 00:35:15.598 "seek_data": false, 00:35:15.598 "copy": false, 00:35:15.598 "nvme_iov_md": false 00:35:15.598 }, 00:35:15.598 "memory_domains": [ 00:35:15.598 { 00:35:15.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:15.598 "dma_device_type": 2 00:35:15.598 } 00:35:15.598 ], 00:35:15.598 "driver_specific": { 00:35:15.598 "crypto": { 00:35:15.598 "base_bdev_name": "EE_base0", 00:35:15.598 "name": "crypt0", 00:35:15.598 "key_name": "test_dek_sw" 00:35:15.598 } 00:35:15.598 } 00:35:15.598 } 00:35:15.598 ] 00:35:15.598 22:18:34 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.598 22:18:34 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@905 -- # return 0 00:35:15.598 22:18:34 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@655 -- # rpcpid=1604259 00:35:15.598 22:18:34 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@657 -- # sleep 1 00:35:15.598 22:18:34 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@654 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:35:15.598 Running I/O for 5 seconds... 
00:35:16.532 22:18:35 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_error_inject_error EE_base0 -n 5 -q 31 write nomem 00:35:16.532 22:18:35 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.532 22:18:35 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:35:16.532 22:18:35 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.532 22:18:35 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@660 -- # wait 1604259 00:35:20.734 00:35:20.734 Latency(us) 00:35:20.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:20.734 Job: crypt0 (Core Mask 0x2, workload: randwrite, depth: 32, IO size: 4096) 00:35:20.734 crypt0 : 5.00 53322.13 208.29 0.00 0.00 597.65 276.89 897.84 00:35:20.734 =================================================================================================================== 00:35:20.734 Total : 53322.13 208.29 0.00 0.00 597.65 276.89 897.84 00:35:20.734 0 00:35:20.734 22:18:39 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@662 -- # rpc_cmd bdev_crypto_delete crypt0 00:35:20.734 22:18:39 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.734 22:18:39 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:35:20.734 22:18:39 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.734 22:18:39 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@664 -- # killprocess 1604174 00:35:20.734 22:18:39 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@948 -- # '[' -z 1604174 ']' 00:35:20.734 22:18:39 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@952 -- # kill -0 1604174 00:35:20.734 22:18:39 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@953 -- # uname 00:35:20.734 22:18:39 
blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:20.734 22:18:39 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1604174 00:35:20.734 22:18:39 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:20.734 22:18:39 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:20.734 22:18:39 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1604174' 00:35:20.734 killing process with pid 1604174 00:35:20.734 22:18:39 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@967 -- # kill 1604174 00:35:20.734 Received shutdown signal, test time was about 5.000000 seconds 00:35:20.734 00:35:20.734 Latency(us) 00:35:20.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:20.734 =================================================================================================================== 00:35:20.734 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:20.734 22:18:39 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@972 -- # wait 1604174 00:35:22.106 22:18:41 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@665 -- # trap - SIGINT SIGTERM EXIT 00:35:22.106 00:35:22.106 real 0m7.328s 00:35:22.106 user 0m7.358s 00:35:22.106 sys 0m0.448s 00:35:22.106 22:18:41 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:22.106 22:18:41 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:35:22.106 ************************************ 00:35:22.106 END TEST bdev_crypto_enomem 00:35:22.106 ************************************ 00:35:22.106 22:18:41 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:35:22.106 22:18:41 blockdev_crypto_sw -- bdev/blockdev.sh@810 -- # trap - 
SIGINT SIGTERM EXIT 00:35:22.106 22:18:41 blockdev_crypto_sw -- bdev/blockdev.sh@811 -- # cleanup 00:35:22.106 22:18:41 blockdev_crypto_sw -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:35:22.106 22:18:41 blockdev_crypto_sw -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:35:22.106 22:18:41 blockdev_crypto_sw -- bdev/blockdev.sh@26 -- # [[ crypto_sw == rbd ]] 00:35:22.106 22:18:41 blockdev_crypto_sw -- bdev/blockdev.sh@30 -- # [[ crypto_sw == daos ]] 00:35:22.106 22:18:41 blockdev_crypto_sw -- bdev/blockdev.sh@34 -- # [[ crypto_sw = \g\p\t ]] 00:35:22.106 22:18:41 blockdev_crypto_sw -- bdev/blockdev.sh@40 -- # [[ crypto_sw == xnvme ]] 00:35:22.106 00:35:22.106 real 1m9.136s 00:35:22.106 user 2m0.210s 00:35:22.106 sys 0m7.316s 00:35:22.106 22:18:41 blockdev_crypto_sw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:22.106 22:18:41 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:35:22.106 ************************************ 00:35:22.106 END TEST blockdev_crypto_sw 00:35:22.106 ************************************ 00:35:22.106 22:18:41 -- common/autotest_common.sh@1142 -- # return 0 00:35:22.106 22:18:41 -- spdk/autotest.sh@359 -- # run_test blockdev_crypto_qat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:35:22.106 22:18:41 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:22.106 22:18:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:22.106 22:18:41 -- common/autotest_common.sh@10 -- # set +x 00:35:22.106 ************************************ 00:35:22.106 START TEST blockdev_crypto_qat 00:35:22.106 ************************************ 00:35:22.106 22:18:41 blockdev_crypto_qat -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:35:22.106 * Looking for test storage... 
00:35:22.106 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/nbd_common.sh@6 -- # set -e 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/blockdev.sh@20 -- # : 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/blockdev.sh@674 -- # uname -s 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/blockdev.sh@682 -- # test_type=crypto_qat 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/blockdev.sh@683 -- # crypto_device= 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/blockdev.sh@684 -- # dek= 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/blockdev.sh@685 -- # 
env_ctx= 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/blockdev.sh@690 -- # [[ crypto_qat == bdev ]] 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/blockdev.sh@690 -- # [[ crypto_qat == crypto_* ]] 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=1605379 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:35:22.106 22:18:41 blockdev_crypto_qat -- bdev/blockdev.sh@49 -- # waitforlisten 1605379 00:35:22.106 22:18:41 blockdev_crypto_qat -- common/autotest_common.sh@829 -- # '[' -z 1605379 ']' 00:35:22.106 22:18:41 blockdev_crypto_qat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:22.106 22:18:41 blockdev_crypto_qat -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:22.106 22:18:41 blockdev_crypto_qat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:22.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:22.106 22:18:41 blockdev_crypto_qat -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:22.106 22:18:41 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:35:22.364 [2024-07-13 22:18:41.565539] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:35:22.364 [2024-07-13 22:18:41.565635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1605379 ] 00:35:22.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:22.364 EAL: Requested device 0000:3d:01.0 cannot be used [previous two messages repeated for each of devices 0000:3d:01.1 through 0000:3f:02.7] 00:35:22.364 [2024-07-13 22:18:41.728610] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:22.622 [2024-07-13 22:18:41.931810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:23.186 22:18:42 blockdev_crypto_qat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:23.186 22:18:42 blockdev_crypto_qat -- common/autotest_common.sh@862 -- # return 0 00:35:23.186 22:18:42 blockdev_crypto_qat -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:35:23.186 22:18:42 blockdev_crypto_qat -- bdev/blockdev.sh@708 -- # setup_crypto_qat_conf 00:35:23.186 22:18:42 blockdev_crypto_qat -- bdev/blockdev.sh@170 -- # rpc_cmd 00:35:23.186 22:18:42 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.186 22:18:42 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:35:23.186 [2024-07-13 22:18:42.317442] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:35:23.186 [2024-07-13 22:18:42.325487] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:35:23.186 [2024-07-13 22:18:42.333505]
accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:35:23.186 [2024-07-13 22:18:42.575724] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:35:26.461 true 00:35:26.461 true 00:35:26.461 true 00:35:26.461 true 00:35:26.461 Malloc0 00:35:26.461 Malloc1 00:35:26.461 Malloc2 00:35:26.461 Malloc3 00:35:26.461 [2024-07-13 22:18:45.821478] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:35:26.461 crypto_ram 00:35:26.461 [2024-07-13 22:18:45.829481] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:35:26.461 crypto_ram1 00:35:26.461 [2024-07-13 22:18:45.837488] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:35:26.461 crypto_ram2 00:35:26.461 [2024-07-13 22:18:45.845519] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:35:26.461 crypto_ram3 00:35:26.718 [ 00:35:26.718 { 00:35:26.718 "name": "Malloc1", 00:35:26.718 "aliases": [ 00:35:26.718 "adc39099-c8a7-4993-a434-de01c5cc96e5" 00:35:26.718 ], 00:35:26.718 "product_name": "Malloc disk", 00:35:26.718 "block_size": 512, 00:35:26.718 "num_blocks": 65536, 00:35:26.718 "uuid": "adc39099-c8a7-4993-a434-de01c5cc96e5", 00:35:26.718 "assigned_rate_limits": { 00:35:26.718 "rw_ios_per_sec": 0, 00:35:26.718 "rw_mbytes_per_sec": 0, 00:35:26.718 "r_mbytes_per_sec": 0, 00:35:26.718 "w_mbytes_per_sec": 0 00:35:26.718 }, 00:35:26.718 "claimed": true, 00:35:26.718 "claim_type": "exclusive_write", 00:35:26.718 "zoned": false, 00:35:26.718 "supported_io_types": { 00:35:26.718 "read": true, 00:35:26.718 "write": true, 00:35:26.718 "unmap": true, 00:35:26.718 "flush": true, 00:35:26.718 "reset": true, 00:35:26.718 "nvme_admin": false, 00:35:26.718 "nvme_io": false, 00:35:26.718 "nvme_io_md": false, 00:35:26.718 "write_zeroes": true, 00:35:26.718 "zcopy": true, 00:35:26.718 
"get_zone_info": false, 00:35:26.718 "zone_management": false, 00:35:26.718 "zone_append": false, 00:35:26.718 "compare": false, 00:35:26.718 "compare_and_write": false, 00:35:26.718 "abort": true, 00:35:26.718 "seek_hole": false, 00:35:26.718 "seek_data": false, 00:35:26.718 "copy": true, 00:35:26.718 "nvme_iov_md": false 00:35:26.718 }, 00:35:26.718 "memory_domains": [ 00:35:26.718 { 00:35:26.718 "dma_device_id": "system", 00:35:26.718 "dma_device_type": 1 00:35:26.718 }, 00:35:26.718 { 00:35:26.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:26.718 "dma_device_type": 2 00:35:26.718 } 00:35:26.718 ], 00:35:26.718 "driver_specific": {} 00:35:26.718 } 00:35:26.718 ] 00:35:26.718 22:18:45 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.718 22:18:45 blockdev_crypto_qat -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:35:26.719 22:18:45 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.719 22:18:45 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:35:26.719 22:18:45 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.719 22:18:45 blockdev_crypto_qat -- bdev/blockdev.sh@740 -- # cat 00:35:26.719 22:18:45 blockdev_crypto_qat -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:35:26.719 22:18:45 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.719 22:18:45 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:35:26.719 22:18:45 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.719 22:18:45 blockdev_crypto_qat -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:35:26.719 22:18:45 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.719 22:18:45 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:35:26.719 22:18:45 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:35:26.719 22:18:45 blockdev_crypto_qat -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:35:26.719 22:18:45 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.719 22:18:45 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:35:26.719 22:18:45 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.719 22:18:45 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:35:26.719 22:18:45 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:35:26.719 22:18:45 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.719 22:18:45 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:35:26.719 22:18:45 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:35:26.719 22:18:46 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.719 22:18:46 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:35:26.719 22:18:46 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "3af95f34-1a53-51c1-a185-74ea945644ee"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3af95f34-1a53-51c1-a185-74ea945644ee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' 
' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "d9a66eaf-1c21-5d5f-99e5-d09f47ef4e64"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d9a66eaf-1c21-5d5f-99e5-d09f47ef4e64",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "e0550d64-31b8-5673-9de5-c1bfd9d035f4"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "e0550d64-31b8-5673-9de5-c1bfd9d035f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "2c1c9e6a-86a0-5e59-aa5c-b9bccc051b4c"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "2c1c9e6a-86a0-5e59-aa5c-b9bccc051b4c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:35:26.719 22:18:46 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # jq -r .name 00:35:26.719 22:18:46 blockdev_crypto_qat -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:35:26.719 22:18:46 blockdev_crypto_qat -- bdev/blockdev.sh@752 -- # hello_world_bdev=crypto_ram 00:35:26.719 22:18:46 blockdev_crypto_qat -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:35:26.719 22:18:46 blockdev_crypto_qat -- bdev/blockdev.sh@754 -- # killprocess 1605379 00:35:26.719 22:18:46 blockdev_crypto_qat -- common/autotest_common.sh@948 -- # '[' -z 1605379 ']' 00:35:26.719 22:18:46 blockdev_crypto_qat -- common/autotest_common.sh@952 -- # kill -0 1605379 00:35:26.719 22:18:46 blockdev_crypto_qat -- common/autotest_common.sh@953 -- # uname 00:35:26.719 22:18:46 blockdev_crypto_qat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:26.719 22:18:46 blockdev_crypto_qat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1605379 00:35:26.977 22:18:46 blockdev_crypto_qat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:26.977 22:18:46 blockdev_crypto_qat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:26.977 22:18:46 blockdev_crypto_qat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1605379' 00:35:26.977 killing process with pid 1605379 00:35:26.977 22:18:46 blockdev_crypto_qat -- common/autotest_common.sh@967 -- # kill 1605379 00:35:26.977 22:18:46 blockdev_crypto_qat -- common/autotest_common.sh@972 -- # wait 1605379 00:35:30.256 22:18:49 blockdev_crypto_qat -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:30.256 22:18:49 blockdev_crypto_qat -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:35:30.256 22:18:49 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:35:30.256 22:18:49 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:30.256 22:18:49 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:35:30.256 ************************************ 00:35:30.256 START TEST bdev_hello_world 00:35:30.256 ************************************ 00:35:30.256 22:18:49 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:35:30.256 [2024-07-13 22:18:49.150170] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:35:30.256 [2024-07-13 22:18:49.150249] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606704 ] 00:35:30.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:30.256 EAL: Requested device 0000:3d:01.0 cannot be used [previous two messages repeated for each of devices 0000:3d:01.1 through 0000:3f:02.7] 00:35:30.256 [2024-07-13 22:18:49.308058] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:30.256 [2024-07-13 22:18:49.511689] reactor.c: 941:reactor_run:
*NOTICE*: Reactor started on core 0 00:35:30.257 [2024-07-13 22:18:49.532977] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:35:30.257 [2024-07-13 22:18:49.540988] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:35:30.257 [2024-07-13 22:18:49.549001] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:35:30.515 [2024-07-13 22:18:49.844916] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:35:33.799 [2024-07-13 22:18:52.439689] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:35:33.799 [2024-07-13 22:18:52.439764] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:35:33.799 [2024-07-13 22:18:52.439778] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:35:33.799 [2024-07-13 22:18:52.447719] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:35:33.799 [2024-07-13 22:18:52.447753] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:35:33.799 [2024-07-13 22:18:52.447765] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:35:33.799 [2024-07-13 22:18:52.455742] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:35:33.799 [2024-07-13 22:18:52.455771] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:35:33.799 [2024-07-13 22:18:52.455781] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:35:33.799 [2024-07-13 22:18:52.463742] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:35:33.799 [2024-07-13 22:18:52.463769] 
bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:35:33.799 [2024-07-13 22:18:52.463779] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:35:33.799 [2024-07-13 22:18:52.651253] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:35:33.799 [2024-07-13 22:18:52.651296] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:35:33.799 [2024-07-13 22:18:52.651320] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:35:33.799 [2024-07-13 22:18:52.652960] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:35:33.799 [2024-07-13 22:18:52.653042] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:35:33.799 [2024-07-13 22:18:52.653059] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:35:33.799 [2024-07-13 22:18:52.653104] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:35:33.799 00:35:33.799 [2024-07-13 22:18:52.653124] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:35:35.172 00:35:35.172 real 0m5.402s 00:35:35.172 user 0m4.919s 00:35:35.172 sys 0m0.428s 00:35:35.172 22:18:54 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:35.172 22:18:54 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:35:35.172 ************************************ 00:35:35.172 END TEST bdev_hello_world 00:35:35.172 ************************************ 00:35:35.172 22:18:54 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:35:35.172 22:18:54 blockdev_crypto_qat -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:35:35.172 22:18:54 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:35.172 22:18:54 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:35.172 22:18:54 blockdev_crypto_qat -- 
common/autotest_common.sh@10 -- # set +x 00:35:35.172 ************************************ 00:35:35.172 START TEST bdev_bounds 00:35:35.172 ************************************ 00:35:35.172 22:18:54 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:35:35.172 22:18:54 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=1607530 00:35:35.172 22:18:54 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:35:35.172 22:18:54 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 1607530' 00:35:35.172 Process bdevio pid: 1607530 00:35:35.172 22:18:54 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 1607530 00:35:35.172 22:18:54 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 1607530 ']' 00:35:35.172 22:18:54 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:35.172 22:18:54 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:35.172 22:18:54 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:35.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:35.172 22:18:54 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:35.172 22:18:54 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:35:35.172 22:18:54 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:35:35.430 [2024-07-13 22:18:54.632085] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:35:35.430 [2024-07-13 22:18:54.632177] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1607530 ] 00:35:35.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.430 EAL: Requested device 0000:3d:01.0 cannot be used [previous two messages repeated for each of devices 0000:3d:01.1 through 0000:3f:02.7] 00:35:35.431 [2024-07-13 22:18:54.794008] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:35.689 [2024-07-13 22:18:55.006059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:35.689 [2024-07-13 22:18:55.006126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:35.689 [2024-07-13 22:18:55.006131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:35.689 [2024-07-13 22:18:55.027450] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:35:35.689 [2024-07-13 22:18:55.035457]
accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:35:35.689 [2024-07-13 22:18:55.043487] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:35:36.254 [2024-07-13 22:18:55.344740] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:35:38.807 [2024-07-13 22:18:57.952366] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:35:38.807 [2024-07-13 22:18:57.952430] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:35:38.807 [2024-07-13 22:18:57.952448] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:35:38.807 [2024-07-13 22:18:57.960383] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:35:38.807 [2024-07-13 22:18:57.960415] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:35:38.807 [2024-07-13 22:18:57.960428] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:35:38.807 [2024-07-13 22:18:57.968422] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:35:38.807 [2024-07-13 22:18:57.968451] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:35:38.807 [2024-07-13 22:18:57.968462] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:35:38.807 [2024-07-13 22:18:57.976423] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:35:38.807 [2024-07-13 22:18:57.976465] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:35:38.807 [2024-07-13 22:18:57.976476] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 
00:35:39.372 22:18:58 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:39.372 22:18:58 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:35:39.372 22:18:58 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:35:39.629 I/O targets: 00:35:39.629 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:35:39.629 crypto_ram1: 65536 blocks of 512 bytes (32 MiB) 00:35:39.629 crypto_ram2: 8192 blocks of 4096 bytes (32 MiB) 00:35:39.629 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:35:39.629 00:35:39.629 00:35:39.629 CUnit - A unit testing framework for C - Version 2.1-3 00:35:39.629 http://cunit.sourceforge.net/ 00:35:39.629 00:35:39.629 00:35:39.629 Suite: bdevio tests on: crypto_ram3 00:35:39.629 Test: blockdev write read block ...passed 00:35:39.629 Test: blockdev write zeroes read block ...passed 00:35:39.629 Test: blockdev write zeroes read no split ...passed 00:35:39.629 Test: blockdev write zeroes read split ...passed 00:35:39.629 Test: blockdev write zeroes read split partial ...passed 00:35:39.629 Test: blockdev reset ...passed 00:35:39.629 Test: blockdev write read 8 blocks ...passed 00:35:39.629 Test: blockdev write read size > 128k ...passed 00:35:39.629 Test: blockdev write read invalid size ...passed 00:35:39.629 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:39.629 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:39.629 Test: blockdev write read max offset ...passed 00:35:39.629 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:39.629 Test: blockdev writev readv 8 blocks ...passed 00:35:39.629 Test: blockdev writev readv 30 x 1block ...passed 00:35:39.629 Test: blockdev writev readv block ...passed 00:35:39.629 Test: blockdev writev readv size > 128k ...passed 00:35:39.629 Test: blockdev writev readv 
size > 128k in two iovs ...passed 00:35:39.629 Test: blockdev comparev and writev ...passed 00:35:39.629 Test: blockdev nvme passthru rw ...passed 00:35:39.629 Test: blockdev nvme passthru vendor specific ...passed 00:35:39.629 Test: blockdev nvme admin passthru ...passed 00:35:39.629 Test: blockdev copy ...passed 00:35:39.629 Suite: bdevio tests on: crypto_ram2 00:35:39.629 Test: blockdev write read block ...passed 00:35:39.629 Test: blockdev write zeroes read block ...passed 00:35:39.629 Test: blockdev write zeroes read no split ...passed 00:35:39.629 Test: blockdev write zeroes read split ...passed 00:35:39.630 Test: blockdev write zeroes read split partial ...passed 00:35:39.630 Test: blockdev reset ...passed 00:35:39.630 Test: blockdev write read 8 blocks ...passed 00:35:39.630 Test: blockdev write read size > 128k ...passed 00:35:39.630 Test: blockdev write read invalid size ...passed 00:35:39.630 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:39.630 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:39.630 Test: blockdev write read max offset ...passed 00:35:39.630 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:39.630 Test: blockdev writev readv 8 blocks ...passed 00:35:39.630 Test: blockdev writev readv 30 x 1block ...passed 00:35:39.630 Test: blockdev writev readv block ...passed 00:35:39.630 Test: blockdev writev readv size > 128k ...passed 00:35:39.630 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:39.630 Test: blockdev comparev and writev ...passed 00:35:39.630 Test: blockdev nvme passthru rw ...passed 00:35:39.630 Test: blockdev nvme passthru vendor specific ...passed 00:35:39.630 Test: blockdev nvme admin passthru ...passed 00:35:39.630 Test: blockdev copy ...passed 00:35:39.630 Suite: bdevio tests on: crypto_ram1 00:35:39.630 Test: blockdev write read block ...passed 00:35:39.630 Test: blockdev write zeroes read block ...passed 00:35:39.630 
Test: blockdev write zeroes read no split ...passed 00:35:39.888 Test: blockdev write zeroes read split ...passed 00:35:39.888 Test: blockdev write zeroes read split partial ...passed 00:35:39.888 Test: blockdev reset ...passed 00:35:39.888 Test: blockdev write read 8 blocks ...passed 00:35:39.888 Test: blockdev write read size > 128k ...passed 00:35:39.888 Test: blockdev write read invalid size ...passed 00:35:39.888 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:39.888 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:39.888 Test: blockdev write read max offset ...passed 00:35:39.888 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:39.888 Test: blockdev writev readv 8 blocks ...passed 00:35:39.888 Test: blockdev writev readv 30 x 1block ...passed 00:35:39.888 Test: blockdev writev readv block ...passed 00:35:39.888 Test: blockdev writev readv size > 128k ...passed 00:35:39.888 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:39.888 Test: blockdev comparev and writev ...passed 00:35:39.888 Test: blockdev nvme passthru rw ...passed 00:35:39.888 Test: blockdev nvme passthru vendor specific ...passed 00:35:39.888 Test: blockdev nvme admin passthru ...passed 00:35:39.888 Test: blockdev copy ...passed 00:35:39.888 Suite: bdevio tests on: crypto_ram 00:35:39.888 Test: blockdev write read block ...passed 00:35:39.888 Test: blockdev write zeroes read block ...passed 00:35:39.888 Test: blockdev write zeroes read no split ...passed 00:35:39.888 Test: blockdev write zeroes read split ...passed 00:35:40.146 Test: blockdev write zeroes read split partial ...passed 00:35:40.146 Test: blockdev reset ...passed 00:35:40.146 Test: blockdev write read 8 blocks ...passed 00:35:40.146 Test: blockdev write read size > 128k ...passed 00:35:40.146 Test: blockdev write read invalid size ...passed 00:35:40.146 Test: blockdev write read offset + nbytes == size of blockdev ...passed 
00:35:40.146 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:40.146 Test: blockdev write read max offset ...passed 00:35:40.146 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:40.146 Test: blockdev writev readv 8 blocks ...passed 00:35:40.146 Test: blockdev writev readv 30 x 1block ...passed 00:35:40.146 Test: blockdev writev readv block ...passed 00:35:40.146 Test: blockdev writev readv size > 128k ...passed 00:35:40.146 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:40.146 Test: blockdev comparev and writev ...passed 00:35:40.146 Test: blockdev nvme passthru rw ...passed 00:35:40.146 Test: blockdev nvme passthru vendor specific ...passed 00:35:40.146 Test: blockdev nvme admin passthru ...passed 00:35:40.146 Test: blockdev copy ...passed 00:35:40.146 00:35:40.146 Run Summary: Type Total Ran Passed Failed Inactive 00:35:40.146 suites 4 4 n/a 0 0 00:35:40.146 tests 92 92 92 0 0 00:35:40.146 asserts 520 520 520 0 n/a 00:35:40.146 00:35:40.146 Elapsed time = 1.339 seconds 00:35:40.146 0 00:35:40.146 22:18:59 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 1607530 00:35:40.146 22:18:59 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 1607530 ']' 00:35:40.146 22:18:59 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 1607530 00:35:40.146 22:18:59 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:35:40.146 22:18:59 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:40.146 22:18:59 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1607530 00:35:40.146 22:18:59 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:40.146 22:18:59 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:40.146 22:18:59 
blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1607530' 00:35:40.146 killing process with pid 1607530 00:35:40.146 22:18:59 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@967 -- # kill 1607530 00:35:40.146 22:18:59 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@972 -- # wait 1607530 00:35:42.044 22:19:01 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:35:42.044 00:35:42.044 real 0m6.748s 00:35:42.044 user 0m18.433s 00:35:42.044 sys 0m0.667s 00:35:42.044 22:19:01 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:42.044 22:19:01 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:35:42.044 ************************************ 00:35:42.044 END TEST bdev_bounds 00:35:42.044 ************************************ 00:35:42.044 22:19:01 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:35:42.044 22:19:01 blockdev_crypto_qat -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:35:42.044 22:19:01 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:35:42.044 22:19:01 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:42.044 22:19:01 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:35:42.044 ************************************ 00:35:42.044 START TEST bdev_nbd 00:35:42.044 ************************************ 00:35:42.044 22:19:01 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:35:42.044 22:19:01 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:35:42.044 22:19:01 
blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:35:42.044 22:19:01 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:42.044 22:19:01 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:35:42.044 22:19:01 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:35:42.044 22:19:01 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:35:42.044 22:19:01 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=4 00:35:42.044 22:19:01 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:35:42.044 22:19:01 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:35:42.044 22:19:01 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:35:42.044 22:19:01 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=4 00:35:42.044 22:19:01 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:35:42.045 22:19:01 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:35:42.045 22:19:01 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:35:42.045 22:19:01 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:35:42.045 22:19:01 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=1608740 00:35:42.045 22:19:01 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:35:42.045 
22:19:01 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:35:42.045 22:19:01 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 1608740 /var/tmp/spdk-nbd.sock 00:35:42.045 22:19:01 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 1608740 ']' 00:35:42.045 22:19:01 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:35:42.045 22:19:01 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:42.045 22:19:01 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:35:42.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:35:42.045 22:19:01 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:42.045 22:19:01 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:35:42.303 [2024-07-13 22:19:01.476983] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:35:42.303 [2024-07-13 22:19:01.477075] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3d:01.0 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3d:01.1 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3d:01.2 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3d:01.3 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3d:01.4 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3d:01.5 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3d:01.6 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3d:01.7 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3d:02.0 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3d:02.1 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3d:02.2 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3d:02.3 cannot be used 00:35:42.303 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3d:02.4 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3d:02.5 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3d:02.6 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3d:02.7 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3f:01.0 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3f:01.1 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3f:01.2 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3f:01.3 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3f:01.4 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3f:01.5 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3f:01.6 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3f:01.7 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3f:02.0 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3f:02.1 cannot be used 00:35:42.303 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3f:02.2 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3f:02.3 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3f:02.4 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3f:02.5 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3f:02.6 cannot be used 00:35:42.303 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:42.303 EAL: Requested device 0000:3f:02.7 cannot be used 00:35:42.303 [2024-07-13 22:19:01.640699] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:42.561 [2024-07-13 22:19:01.848635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:42.561 [2024-07-13 22:19:01.869852] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:35:42.561 [2024-07-13 22:19:01.877871] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:35:42.561 [2024-07-13 22:19:01.885898] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:35:42.818 [2024-07-13 22:19:02.172039] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:35:46.099 [2024-07-13 22:19:04.786633] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:35:46.099 [2024-07-13 22:19:04.786691] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:35:46.099 [2024-07-13 22:19:04.786706] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base 
bdev arrival 00:35:46.099 [2024-07-13 22:19:04.794653] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:35:46.099 [2024-07-13 22:19:04.794686] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:35:46.099 [2024-07-13 22:19:04.794699] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:35:46.099 [2024-07-13 22:19:04.802692] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:35:46.099 [2024-07-13 22:19:04.802719] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:35:46.099 [2024-07-13 22:19:04.802731] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:35:46.099 [2024-07-13 22:19:04.810685] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:35:46.099 [2024-07-13 22:19:04.810712] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:35:46.099 [2024-07-13 22:19:04.810723] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:35:46.358 22:19:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:46.358 22:19:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:35:46.358 22:19:05 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:35:46.358 22:19:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:46.358 22:19:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:35:46.358 22:19:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:35:46.358 22:19:05 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:35:46.358 22:19:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:46.358 22:19:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:35:46.358 22:19:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:35:46.358 22:19:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:35:46.358 22:19:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:35:46.358 22:19:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:35:46.358 22:19:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:35:46.358 22:19:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@871 -- # break 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:46.617 1+0 records in 00:35:46.617 1+0 records out 00:35:46.617 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340559 s, 12.0 MB/s 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local 
nbd_name=nbd1 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:35:46.617 22:19:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:46.617 1+0 records in 00:35:46.617 1+0 records out 00:35:46.617 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350861 s, 11.7 MB/s 00:35:46.617 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:46.876 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:35:46.876 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:46.876 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:35:46.876 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:35:46.876 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:46.876 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:35:46.876 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:35:46.876 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:35:46.876 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:35:46.876 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:35:46.876 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:35:46.876 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:35:46.876 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:35:46.876 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:35:46.876 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:35:46.876 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:35:46.876 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:35:46.876 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:35:46.876 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:46.876 1+0 records in 00:35:46.876 1+0 records out 00:35:46.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344305 s, 11.9 MB/s 00:35:46.876 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:46.876 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:35:46.876 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:46.876 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:35:46.876 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:35:46.876 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:46.876 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:35:46.876 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:35:47.135 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:35:47.135 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:35:47.135 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:35:47.135 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:35:47.135 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:35:47.135 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:35:47.135 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:35:47.135 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:35:47.135 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:35:47.135 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:35:47.135 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:35:47.135 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:47.135 
1+0 records in 00:35:47.135 1+0 records out 00:35:47.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345018 s, 11.9 MB/s 00:35:47.135 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:47.135 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:35:47.135 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:47.135 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:35:47.135 22:19:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:35:47.135 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:47.135 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:35:47.135 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:47.394 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:35:47.394 { 00:35:47.394 "nbd_device": "/dev/nbd0", 00:35:47.394 "bdev_name": "crypto_ram" 00:35:47.394 }, 00:35:47.394 { 00:35:47.394 "nbd_device": "/dev/nbd1", 00:35:47.394 "bdev_name": "crypto_ram1" 00:35:47.394 }, 00:35:47.394 { 00:35:47.394 "nbd_device": "/dev/nbd2", 00:35:47.394 "bdev_name": "crypto_ram2" 00:35:47.394 }, 00:35:47.394 { 00:35:47.394 "nbd_device": "/dev/nbd3", 00:35:47.394 "bdev_name": "crypto_ram3" 00:35:47.394 } 00:35:47.394 ]' 00:35:47.394 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:35:47.394 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:35:47.394 { 00:35:47.394 "nbd_device": "/dev/nbd0", 00:35:47.394 
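The repeated `waitfornbd` traces above follow one pattern: poll `/proc/partitions` for the device name (up to 20 tries), then do a single 4 KiB read and check the result is non-empty to confirm the device actually serves I/O. A minimal sketch of that pattern, with the partitions table and device path made injectable (hypothetical parameters, so it can run against plain files instead of real `/dev/nbd*` devices; `iflag=direct` is dropped for the same reason):

```shell
#!/bin/sh
# Simplified sketch of the waitfornbd pattern from autotest_common.sh.
# All four arguments are stand-ins: in the real test they are fixed to
# /proc/partitions and /dev/$nbd_name.
waitfornbd_sketch() {
    nbd_name=$1    # e.g. nbd1
    partitions=$2  # normally /proc/partitions
    device=$3      # normally /dev/$nbd_name
    scratch=$4     # temp file receiving the probe read

    i=1
    while [ "$i" -le 20 ]; do
        # -w matches the bare device name as a whole word
        grep -q -w "$nbd_name" "$partitions" && break
        sleep 0.1
        i=$((i + 1))
    done
    [ "$i" -le 20 ] || return 1

    # One 4 KiB read: a zero-size result means the kernel registered the
    # device but it is not actually answering I/O yet.
    dd if="$device" of="$scratch" bs=4096 count=1 2>/dev/null || return 1
    size=$(stat -c %s "$scratch")
    rm -f "$scratch"
    [ "$size" != 0 ]
}

# Demo against plain files standing in for /proc/partitions and /dev/nbd1.
tmpdir=$(mktemp -d)
printf 'major minor  #blocks  name\n  43  0  1024  nbd1\n' > "$tmpdir/partitions"
dd if=/dev/zero of="$tmpdir/nbd1" bs=4096 count=1 2>/dev/null
ready=no
if waitfornbd_sketch nbd1 "$tmpdir/partitions" "$tmpdir/nbd1" "$tmpdir/scratch"; then
    ready=yes
fi
echo "ready=$ready"
rm -rf "$tmpdir"
```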
"bdev_name": "crypto_ram" 00:35:47.394 }, 00:35:47.394 { 00:35:47.394 "nbd_device": "/dev/nbd1", 00:35:47.394 "bdev_name": "crypto_ram1" 00:35:47.394 }, 00:35:47.394 { 00:35:47.394 "nbd_device": "/dev/nbd2", 00:35:47.394 "bdev_name": "crypto_ram2" 00:35:47.394 }, 00:35:47.394 { 00:35:47.394 "nbd_device": "/dev/nbd3", 00:35:47.394 "bdev_name": "crypto_ram3" 00:35:47.394 } 00:35:47.394 ]' 00:35:47.394 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:35:47.394 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:35:47.394 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:47.394 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:35:47.394 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:47.394 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:35:47.394 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:47.394 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:47.653 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:47.653 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:47.653 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:47.653 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:47.653 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:47.653 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:35:47.653 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:47.653 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:47.653 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:47.653 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:35:47.653 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:47.653 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:47.653 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:47.653 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:47.653 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:47.653 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:47.653 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:47.653 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:47.653 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:47.653 22:19:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:35:47.912 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:35:47.912 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:35:47.912 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:35:47.912 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:47.912 22:19:07 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:47.912 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:35:47.912 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:47.912 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:47.912 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:47.912 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:35:48.170 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:35:48.171 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:35:48.171 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:35:48.171 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:48.171 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:48.171 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:35:48.171 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:48.171 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:48.171 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:48.171 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:48.171 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:48.171 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:35:48.171 22:19:07 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:35:48.171 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 
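The `nbd_get_count` check that just ran works by grepping the `nbd_get_disks` JSON for `/dev/nbd` occurrences, expecting the count to reach 0 once every device is stopped; the trailing `true` in the trace keeps the pipeline alive when `grep -c` finds nothing and exits non-zero. A sketch of that counting logic against a hand-written stand-in for the RPC output:

```shell
#!/bin/sh
# Sketch of the nbd_get_count pattern: count attached devices by grepping
# the nbd_get_disks JSON. The JSON literal below is a stand-in, not real
# RPC output.
count_nbd() {
    # grep -c still prints "0" when nothing matches but exits non-zero;
    # `|| true` mirrors the `-- # true` line in the log (harmless under set -e).
    printf '%s\n' "$1" | grep -c /dev/nbd || true
}

json='[ { "nbd_device": "/dev/nbd0", "bdev_name": "crypto_ram" },
{ "nbd_device": "/dev/nbd1", "bdev_name": "crypto_ram1" } ]'
before=$(count_nbd "$json")   # two lines each mention /dev/nbd
after=$(count_nbd '[]')       # all devices stopped -> empty list
echo "before=$before after=$after"
```

Note that `grep -c` counts matching lines rather than occurrences, which is why the real script pipes the JSON through `jq -r '.[] | .nbd_device'` first when it needs one device per line.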
'/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:35:48.430 /dev/nbd0 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:35:48.430 22:19:07 
blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:48.430 1+0 records in 00:35:48.430 1+0 records out 00:35:48.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002214 s, 18.5 MB/s 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:35:48.430 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 /dev/nbd1 00:35:48.689 /dev/nbd1 00:35:48.689 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:35:48.689 22:19:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:35:48.689 22:19:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:35:48.689 22:19:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:35:48.689 22:19:07 
blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:35:48.689 22:19:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:35:48.689 22:19:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:35:48.689 22:19:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:35:48.689 22:19:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:35:48.689 22:19:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:35:48.689 22:19:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:48.689 1+0 records in 00:35:48.689 1+0 records out 00:35:48.689 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300322 s, 13.6 MB/s 00:35:48.689 22:19:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:48.689 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:35:48.689 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:48.689 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:35:48.689 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:35:48.689 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:48.689 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:35:48.689 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd10 00:35:48.948 /dev/nbd10 00:35:48.948 
22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:35:48.948 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:35:48.948 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:35:48.948 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:35:48.948 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:35:48.948 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:35:48.948 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:35:48.948 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:35:48.948 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:35:48.948 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:35:48.948 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:48.948 1+0 records in 00:35:48.948 1+0 records out 00:35:48.948 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303975 s, 13.5 MB/s 00:35:48.948 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:48.948 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:35:48.948 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:48.948 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:35:48.948 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 
00:35:48.948 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:48.948 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:35:48.948 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd11 00:35:49.207 /dev/nbd11 00:35:49.207 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:35:49.207 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:35:49.207 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:35:49.207 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:35:49.207 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:35:49.207 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:35:49.207 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:35:49.207 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:35:49.207 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:35:49.207 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:35:49.207 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:49.207 1+0 records in 00:35:49.207 1+0 records out 00:35:49.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340977 s, 12.0 MB/s 00:35:49.207 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:49.207 22:19:08 
blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:35:49.207 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:49.207 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:35:49.207 22:19:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:35:49.207 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:49.207 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:35:49.207 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:49.207 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:49.207 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:49.207 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:35:49.207 { 00:35:49.207 "nbd_device": "/dev/nbd0", 00:35:49.207 "bdev_name": "crypto_ram" 00:35:49.207 }, 00:35:49.207 { 00:35:49.207 "nbd_device": "/dev/nbd1", 00:35:49.207 "bdev_name": "crypto_ram1" 00:35:49.207 }, 00:35:49.207 { 00:35:49.207 "nbd_device": "/dev/nbd10", 00:35:49.207 "bdev_name": "crypto_ram2" 00:35:49.207 }, 00:35:49.207 { 00:35:49.207 "nbd_device": "/dev/nbd11", 00:35:49.207 "bdev_name": "crypto_ram3" 00:35:49.207 } 00:35:49.207 ]' 00:35:49.207 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:35:49.207 { 00:35:49.207 "nbd_device": "/dev/nbd0", 00:35:49.207 "bdev_name": "crypto_ram" 00:35:49.207 }, 00:35:49.207 { 00:35:49.207 "nbd_device": "/dev/nbd1", 00:35:49.207 "bdev_name": "crypto_ram1" 00:35:49.207 }, 00:35:49.207 { 00:35:49.207 "nbd_device": "/dev/nbd10", 00:35:49.207 
"bdev_name": "crypto_ram2" 00:35:49.207 }, 00:35:49.207 { 00:35:49.207 "nbd_device": "/dev/nbd11", 00:35:49.207 "bdev_name": "crypto_ram3" 00:35:49.207 } 00:35:49.207 ]' 00:35:49.207 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:49.466 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:35:49.466 /dev/nbd1 00:35:49.466 /dev/nbd10 00:35:49.466 /dev/nbd11' 00:35:49.466 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:35:49.466 /dev/nbd1 00:35:49.466 /dev/nbd10 00:35:49.466 /dev/nbd11' 00:35:49.466 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:49.466 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:35:49.466 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:35:49.466 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:35:49.466 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:35:49.466 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:35:49.466 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:35:49.466 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:35:49.466 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:35:49.466 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:35:49.466 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:35:49.466 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:35:49.466 256+0 records in 00:35:49.466 256+0 records out 00:35:49.466 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110561 s, 94.8 MB/s 00:35:49.466 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:49.466 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:35:49.466 256+0 records in 00:35:49.466 256+0 records out 00:35:49.466 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0465727 s, 22.5 MB/s 00:35:49.466 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:49.466 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:35:49.466 256+0 records in 00:35:49.466 256+0 records out 00:35:49.466 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0353497 s, 29.7 MB/s 00:35:49.466 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:49.466 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:35:49.466 256+0 records in 00:35:49.466 256+0 records out 00:35:49.466 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0426819 s, 24.6 MB/s 00:35:49.466 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:49.466 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:35:49.466 256+0 records in 00:35:49.466 256+0 records out 00:35:49.466 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269405 s, 
38.9 MB/s 00:35:49.466 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:35:49.466 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:35:49.466 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:35:49.467 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:35:49.467 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:35:49.467 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:35:49.467 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:35:49.467 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:49.467 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:35:49.467 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:49.467 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:35:49.725 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:49.725 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:35:49.725 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:49.725 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 
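The `nbd_dd_data_verify` write/verify round-trip traced above generates a 1 MiB random pattern (`bs=4096 count=256`), writes it through each nbd device with `dd`, then compares each device back against the pattern with `cmp -b -n 1M`. A sketch of that round-trip, with plain temp files standing in for `/dev/nbd*` (so `oflag=direct` is dropped, since regular files need no alignment):

```shell
#!/bin/sh
# Sketch of the nbd_dd_data_verify write/verify pattern. Temp files play
# the role of the nbd devices.
tmpdir=$(mktemp -d)

# Write phase: 1 MiB of random data as the reference pattern.
dd if=/dev/urandom of="$tmpdir/nbdrandtest" bs=4096 count=256 2>/dev/null

status=ok
for dev in "$tmpdir/nbd0" "$tmpdir/nbd1"; do
    # push the pattern onto the "device"
    dd if="$tmpdir/nbdrandtest" of="$dev" bs=4096 count=256 2>/dev/null
    # Verify phase: cmp exits non-zero at the first differing byte
    # (-b prints the differing bytes, -n 1M bounds the comparison).
    cmp -b -n 1M "$tmpdir/nbdrandtest" "$dev" || status=corrupt
done
echo "verify=$status"
rm -rf "$tmpdir"
```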
/dev/nbd11 00:35:49.725 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:35:49.725 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:35:49.725 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:49.725 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:35:49.725 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:49.725 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:35:49.725 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:49.725 22:19:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:49.725 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:49.725 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:49.725 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:49.725 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:49.725 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:49.725 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:49.725 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:49.725 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:49.725 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:49.725 22:19:09 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:35:49.984 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:49.984 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:49.984 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:49.984 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:49.984 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:49.984 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:49.984 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:49.984 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:49.984 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:49.984 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:35:50.243 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:35:50.243 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:35:50.243 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:35:50.243 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:50.243 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:50.243 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:35:50.243 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:50.243 22:19:09 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:50.243 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:50.243 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:35:50.243 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:35:50.243 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:35:50.243 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:35:50.243 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:50.243 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:50.243 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:35:50.243 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:50.243 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:50.502 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:50.502 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:50.502 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:50.502 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:35:50.502 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:35:50.502 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:50.502 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:35:50.502 22:19:09 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:35:50.502 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:50.502 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:35:50.502 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:35:50.502 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:35:50.502 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:35:50.502 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:35:50.502 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:35:50.502 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:35:50.502 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:50.502 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:35:50.502 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:35:50.502 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:35:50.502 22:19:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:35:50.760 malloc_lvol_verify 00:35:50.760 22:19:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:35:51.018 0b0a848a-e635-4554-8c49-3c36c4ecf0cf 00:35:51.018 22:19:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:35:51.019 642d90df-a676-4330-b59a-efa259faaf78 00:35:51.019 22:19:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:35:51.277 /dev/nbd0 00:35:51.277 22:19:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:35:51.277 mke2fs 1.46.5 (30-Dec-2021) 00:35:51.277 Discarding device blocks: 0/4096 done 00:35:51.277 Creating filesystem with 4096 1k blocks and 1024 inodes 00:35:51.277 00:35:51.277 Allocating group tables: 0/1 done 00:35:51.277 Writing inode tables: 0/1 done 00:35:51.277 Creating journal (1024 blocks): done 00:35:51.277 Writing superblocks and filesystem accounting information: 0/1 done 00:35:51.277 00:35:51.277 22:19:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:35:51.277 22:19:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:35:51.277 22:19:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:51.277 22:19:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:35:51.277 22:19:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:51.277 22:19:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:35:51.277 22:19:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:51.277 22:19:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:51.535 22:19:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:51.535 22:19:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:51.535 
22:19:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:51.535 22:19:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:51.535 22:19:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:51.535 22:19:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:51.535 22:19:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:51.535 22:19:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:51.535 22:19:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:35:51.535 22:19:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:35:51.535 22:19:10 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 1608740 00:35:51.535 22:19:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 1608740 ']' 00:35:51.535 22:19:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 1608740 00:35:51.535 22:19:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:35:51.535 22:19:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:51.535 22:19:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1608740 00:35:51.535 22:19:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:51.535 22:19:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:51.535 22:19:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1608740' 00:35:51.535 killing process with pid 1608740 00:35:51.535 22:19:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@967 -- # kill 1608740 00:35:51.535 22:19:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@972 -- # wait 
1608740 00:35:53.466 22:19:12 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:35:53.466 00:35:53.466 real 0m11.368s 00:35:53.466 user 0m13.376s 00:35:53.466 sys 0m3.339s 00:35:53.466 22:19:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:53.466 22:19:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:35:53.466 ************************************ 00:35:53.466 END TEST bdev_nbd 00:35:53.466 ************************************ 00:35:53.466 22:19:12 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:35:53.466 22:19:12 blockdev_crypto_qat -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:35:53.466 22:19:12 blockdev_crypto_qat -- bdev/blockdev.sh@764 -- # '[' crypto_qat = nvme ']' 00:35:53.466 22:19:12 blockdev_crypto_qat -- bdev/blockdev.sh@764 -- # '[' crypto_qat = gpt ']' 00:35:53.466 22:19:12 blockdev_crypto_qat -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:35:53.466 22:19:12 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:53.466 22:19:12 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:53.466 22:19:12 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:35:53.466 ************************************ 00:35:53.466 START TEST bdev_fio 00:35:53.466 ************************************ 00:35:53.466 22:19:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:35:53.466 22:19:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:35:53.467 22:19:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:35:53.467 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:35:53.467 22:19:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; 
popd; exit 1' SIGINT SIGTERM EXIT 00:35:53.467 22:19:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:35:53.467 22:19:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:35:53.467 22:19:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:35:53.467 22:19:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:35:53.467 22:19:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:35:53.467 22:19:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:35:53.467 22:19:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:35:53.467 22:19:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:35:53.467 22:19:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:35:53.467 22:19:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:35:53.467 22:19:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:35:53.467 22:19:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:35:53.467 22:19:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:35:53.467 22:19:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:35:53.467 22:19:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:35:53.467 22:19:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:35:53.467 22:19:12 
blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:35:53.467 22:19:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram]' 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram1]' 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram1 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram2]' 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram2 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram3]' 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram3 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:35:53.725 ************************************ 00:35:53.725 START TEST bdev_fio_rw_verify 00:35:53.725 ************************************ 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # 
local fio_dir=/usr/src/fio 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:53.725 22:19:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:53.726 22:19:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:35:53.726 22:19:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:53.726 22:19:12 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:35:53.983 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:35:53.983 job_crypto_ram1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:35:53.983 job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:35:53.983 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:35:53.983 fio-3.35 00:35:53.983 Starting 4 threads 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3d:01.0 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3d:01.1 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3d:01.2 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3d:01.3 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3d:01.4 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3d:01.5 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3d:01.6 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:35:54.240 EAL: Requested device 0000:3d:01.7 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3d:02.0 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3d:02.1 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3d:02.2 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3d:02.3 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3d:02.4 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3d:02.5 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3d:02.6 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3d:02.7 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3f:01.0 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3f:01.1 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3f:01.2 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3f:01.3 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3f:01.4 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:35:54.240 EAL: Requested device 0000:3f:01.5 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3f:01.6 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3f:01.7 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3f:02.0 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3f:02.1 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3f:02.2 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3f:02.3 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3f:02.4 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3f:02.5 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3f:02.6 cannot be used 00:35:54.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:54.240 EAL: Requested device 0000:3f:02.7 cannot be used 00:36:09.097 00:36:09.097 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=1611550: Sat Jul 13 22:19:26 2024 00:36:09.097 read: IOPS=28.0k, BW=109MiB/s (114MB/s)(1092MiB/10001msec) 00:36:09.097 slat (usec): min=13, max=511, avg=48.96, stdev=30.28 00:36:09.097 clat (usec): min=16, max=2260, avg=275.71, stdev=176.77 00:36:09.097 lat (usec): min=46, max=2523, avg=324.67, stdev=192.62 00:36:09.097 clat percentiles (usec): 00:36:09.097 | 50.000th=[ 223], 99.000th=[ 848], 99.900th=[ 996], 99.990th=[ 1483], 00:36:09.097 | 
99.999th=[ 2180] 00:36:09.097 write: IOPS=30.7k, BW=120MiB/s (126MB/s)(1173MiB/9779msec); 0 zone resets 00:36:09.097 slat (usec): min=21, max=375, avg=58.82, stdev=30.34 00:36:09.097 clat (usec): min=15, max=1637, avg=306.75, stdev=186.05 00:36:09.097 lat (usec): min=47, max=1782, avg=365.57, stdev=201.38 00:36:09.097 clat percentiles (usec): 00:36:09.097 | 50.000th=[ 265], 99.000th=[ 906], 99.900th=[ 1045], 99.990th=[ 1139], 00:36:09.097 | 99.999th=[ 1418] 00:36:09.097 bw ( KiB/s): min=103744, max=154048, per=97.79%, avg=120153.26, stdev=2805.89, samples=76 00:36:09.097 iops : min=25936, max=38512, avg=30038.32, stdev=701.47, samples=76 00:36:09.097 lat (usec) : 20=0.01%, 50=0.03%, 100=8.20%, 250=43.81%, 500=35.21% 00:36:09.097 lat (usec) : 750=9.65%, 1000=2.93% 00:36:09.097 lat (msec) : 2=0.17%, 4=0.01% 00:36:09.097 cpu : usr=99.31%, sys=0.27%, ctx=75, majf=0, minf=28219 00:36:09.097 IO depths : 1=3.7%, 2=27.5%, 4=55.0%, 8=13.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:09.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.097 complete : 0=0.0%, 4=87.9%, 8=12.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.097 issued rwts: total=279552,300387,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.097 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:09.097 00:36:09.097 Run status group 0 (all jobs): 00:36:09.097 READ: bw=109MiB/s (114MB/s), 109MiB/s-109MiB/s (114MB/s-114MB/s), io=1092MiB (1145MB), run=10001-10001msec 00:36:09.097 WRITE: bw=120MiB/s (126MB/s), 120MiB/s-120MiB/s (126MB/s-126MB/s), io=1173MiB (1230MB), run=9779-9779msec 00:36:09.354 ----------------------------------------------------- 00:36:09.355 Suppressions used: 00:36:09.355 count bytes template 00:36:09.355 4 47 /usr/src/fio/parse.c 00:36:09.355 3095 297120 /usr/src/fio/iolog.c 00:36:09.355 1 8 libtcmalloc_minimal.so 00:36:09.355 1 904 libcrypto.so 00:36:09.355 ----------------------------------------------------- 00:36:09.355 00:36:09.613 00:36:09.613 real 
0m15.844s 00:36:09.613 user 0m53.952s 00:36:09.613 sys 0m0.877s 00:36:09.613 22:19:28 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:09.613 22:19:28 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:36:09.613 ************************************ 00:36:09.613 END TEST bdev_fio_rw_verify 00:36:09.613 ************************************ 00:36:09.613 22:19:28 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:36:09.613 22:19:28 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:36:09.613 22:19:28 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:36:09.613 22:19:28 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:36:09.613 22:19:28 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:36:09.613 22:19:28 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:36:09.613 22:19:28 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:36:09.613 22:19:28 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:36:09.613 22:19:28 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:36:09.613 22:19:28 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:36:09.613 22:19:28 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:36:09.613 22:19:28 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:36:09.613 22:19:28 
blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:36:09.613 22:19:28 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:36:09.613 22:19:28 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:36:09.613 22:19:28 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:36:09.613 22:19:28 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:36:09.613 22:19:28 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:36:09.613 22:19:28 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "3af95f34-1a53-51c1-a185-74ea945644ee"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3af95f34-1a53-51c1-a185-74ea945644ee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' 
"name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "d9a66eaf-1c21-5d5f-99e5-d09f47ef4e64"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d9a66eaf-1c21-5d5f-99e5-d09f47ef4e64",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "e0550d64-31b8-5673-9de5-c1bfd9d035f4"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "e0550d64-31b8-5673-9de5-c1bfd9d035f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "2c1c9e6a-86a0-5e59-aa5c-b9bccc051b4c"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "2c1c9e6a-86a0-5e59-aa5c-b9bccc051b4c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:36:09.613 22:19:28 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n crypto_ram 00:36:09.613 
crypto_ram1 00:36:09.613 crypto_ram2 00:36:09.613 crypto_ram3 ]] 00:36:09.613 22:19:28 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "3af95f34-1a53-51c1-a185-74ea945644ee"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3af95f34-1a53-51c1-a185-74ea945644ee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "d9a66eaf-1c21-5d5f-99e5-d09f47ef4e64"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d9a66eaf-1c21-5d5f-99e5-d09f47ef4e64",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": 
true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "e0550d64-31b8-5673-9de5-c1bfd9d035f4"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "e0550d64-31b8-5673-9de5-c1bfd9d035f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' 
' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "2c1c9e6a-86a0-5e59-aa5c-b9bccc051b4c"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "2c1c9e6a-86a0-5e59-aa5c-b9bccc051b4c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram]' 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio 
-- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram1]' 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram1 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram2]' 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram2 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram3]' 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram3 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:36:09.614 ************************************ 00:36:09.614 START TEST bdev_fio_trim 00:36:09.614 ************************************ 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # 
awk '{print $3}' 00:36:09.614 22:19:28 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:36:09.872 22:19:29 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:09.872 22:19:29 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:09.872 22:19:29 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # break 00:36:09.872 22:19:29 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:09.872 22:19:29 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:36:10.130 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:36:10.130 job_crypto_ram1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:36:10.130 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:36:10.130 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:36:10.130 fio-3.35 00:36:10.130 Starting 4 threads 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3d:01.0 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: 
Requested device 0000:3d:01.1 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3d:01.2 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3d:01.3 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3d:01.4 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3d:01.5 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3d:01.6 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3d:01.7 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3d:02.0 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3d:02.1 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3d:02.2 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3d:02.3 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3d:02.4 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3d:02.5 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3d:02.6 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 
0000:3d:02.7 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3f:01.0 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3f:01.1 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3f:01.2 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3f:01.3 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3f:01.4 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3f:01.5 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3f:01.6 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3f:01.7 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3f:02.0 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3f:02.1 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3f:02.2 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3f:02.3 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3f:02.4 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3f:02.5 cannot be 
used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3f:02.6 cannot be used 00:36:10.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:10.130 EAL: Requested device 0000:3f:02.7 cannot be used 00:36:24.993 00:36:24.993 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=1614327: Sat Jul 13 22:19:42 2024 00:36:24.993 write: IOPS=51.0k, BW=199MiB/s (209MB/s)(1993MiB/10001msec); 0 zone resets 00:36:24.993 slat (usec): min=13, max=601, avg=46.78, stdev=28.72 00:36:24.993 clat (usec): min=18, max=1217, avg=164.11, stdev=98.09 00:36:24.993 lat (usec): min=31, max=1671, avg=210.89, stdev=114.30 00:36:24.993 clat percentiles (usec): 00:36:24.993 | 50.000th=[ 145], 99.000th=[ 502], 99.900th=[ 603], 99.990th=[ 693], 00:36:24.993 | 99.999th=[ 1074] 00:36:24.993 bw ( KiB/s): min=186848, max=264352, per=100.00%, avg=204565.47, stdev=6933.08, samples=76 00:36:24.993 iops : min=46712, max=66088, avg=51141.37, stdev=1733.27, samples=76 00:36:24.993 trim: IOPS=51.0k, BW=199MiB/s (209MB/s)(1993MiB/10001msec); 0 zone resets 00:36:24.993 slat (nsec): min=4521, max=76628, avg=11993.62, stdev=5216.81 00:36:24.993 clat (usec): min=30, max=1671, avg=208.88, stdev=115.36 00:36:24.993 lat (usec): min=35, max=1699, avg=220.87, stdev=117.42 00:36:24.993 clat percentiles (usec): 00:36:24.993 | 50.000th=[ 182], 99.000th=[ 603], 99.900th=[ 709], 99.990th=[ 857], 00:36:24.993 | 99.999th=[ 1401] 00:36:24.993 bw ( KiB/s): min=186848, max=264352, per=100.00%, avg=204565.89, stdev=6933.00, samples=76 00:36:24.993 iops : min=46712, max=66088, avg=51141.47, stdev=1733.25, samples=76 00:36:24.993 lat (usec) : 20=0.01%, 50=2.72%, 100=17.53%, 250=58.61%, 500=19.01% 00:36:24.993 lat (usec) : 750=2.10%, 1000=0.02% 00:36:24.993 lat (msec) : 2=0.01% 00:36:24.993 cpu : usr=99.57%, sys=0.04%, ctx=72, majf=0, minf=7672 00:36:24.993 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:24.993 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:24.993 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:24.993 issued rwts: total=0,510216,510216,0 short=0,0,0,0 dropped=0,0,0,0 00:36:24.993 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:24.993 00:36:24.993 Run status group 0 (all jobs): 00:36:24.993 WRITE: bw=199MiB/s (209MB/s), 199MiB/s-199MiB/s (209MB/s-209MB/s), io=1993MiB (2090MB), run=10001-10001msec 00:36:24.993 TRIM: bw=199MiB/s (209MB/s), 199MiB/s-199MiB/s (209MB/s-209MB/s), io=1993MiB (2090MB), run=10001-10001msec 00:36:25.559 ----------------------------------------------------- 00:36:25.559 Suppressions used: 00:36:25.559 count bytes template 00:36:25.559 4 47 /usr/src/fio/parse.c 00:36:25.559 1 8 libtcmalloc_minimal.so 00:36:25.559 1 904 libcrypto.so 00:36:25.559 ----------------------------------------------------- 00:36:25.559 00:36:25.559 00:36:25.559 real 0m15.834s 00:36:25.559 user 0m53.397s 00:36:25.559 sys 0m0.726s 00:36:25.559 22:19:44 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:25.559 22:19:44 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:36:25.559 ************************************ 00:36:25.559 END TEST bdev_fio_trim 00:36:25.559 ************************************ 00:36:25.559 22:19:44 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:36:25.559 22:19:44 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:36:25.559 22:19:44 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:36:25.559 22:19:44 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:36:25.559 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:36:25.559 22:19:44 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:36:25.559 
00:36:25.559 real 0m32.030s 00:36:25.559 user 1m47.529s 00:36:25.559 sys 0m1.800s 00:36:25.559 22:19:44 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:25.559 22:19:44 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:36:25.559 ************************************ 00:36:25.559 END TEST bdev_fio 00:36:25.559 ************************************ 00:36:25.559 22:19:44 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:36:25.559 22:19:44 blockdev_crypto_qat -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:25.559 22:19:44 blockdev_crypto_qat -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:36:25.559 22:19:44 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:36:25.559 22:19:44 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:25.559 22:19:44 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:36:25.559 ************************************ 00:36:25.559 START TEST bdev_verify 00:36:25.559 ************************************ 00:36:25.559 22:19:44 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:36:25.817 [2024-07-13 22:19:45.026972] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:36:25.817 [2024-07-13 22:19:45.027059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1616233 ] 00:36:25.817 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.817 EAL: Requested device 0000:3d:01.0 cannot be used 00:36:25.817 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.817 EAL: Requested device 0000:3d:01.1 cannot be used 00:36:25.817 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.817 EAL: Requested device 0000:3d:01.2 cannot be used 00:36:25.817 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.817 EAL: Requested device 0000:3d:01.3 cannot be used 00:36:25.817 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.817 EAL: Requested device 0000:3d:01.4 cannot be used 00:36:25.817 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.817 EAL: Requested device 0000:3d:01.5 cannot be used 00:36:25.817 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.817 EAL: Requested device 0000:3d:01.6 cannot be used 00:36:25.817 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.817 EAL: Requested device 0000:3d:01.7 cannot be used 00:36:25.817 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.817 EAL: Requested device 0000:3d:02.0 cannot be used 00:36:25.817 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.818 EAL: Requested device 0000:3d:02.1 cannot be used 00:36:25.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.818 EAL: Requested device 0000:3d:02.2 cannot be used 00:36:25.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.818 EAL: Requested device 0000:3d:02.3 cannot be used 
00:36:25.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.818 EAL: Requested device 0000:3d:02.4 cannot be used 00:36:25.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.818 EAL: Requested device 0000:3d:02.5 cannot be used 00:36:25.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.818 EAL: Requested device 0000:3d:02.6 cannot be used 00:36:25.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.818 EAL: Requested device 0000:3d:02.7 cannot be used 00:36:25.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.818 EAL: Requested device 0000:3f:01.0 cannot be used 00:36:25.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.818 EAL: Requested device 0000:3f:01.1 cannot be used 00:36:25.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.818 EAL: Requested device 0000:3f:01.2 cannot be used 00:36:25.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.818 EAL: Requested device 0000:3f:01.3 cannot be used 00:36:25.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.818 EAL: Requested device 0000:3f:01.4 cannot be used 00:36:25.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.818 EAL: Requested device 0000:3f:01.5 cannot be used 00:36:25.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.818 EAL: Requested device 0000:3f:01.6 cannot be used 00:36:25.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.818 EAL: Requested device 0000:3f:01.7 cannot be used 00:36:25.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.818 EAL: Requested device 0000:3f:02.0 cannot be used 00:36:25.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.818 EAL: Requested device 0000:3f:02.1 cannot be used 00:36:25.818 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.818 EAL: Requested device 0000:3f:02.2 cannot be used 00:36:25.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.818 EAL: Requested device 0000:3f:02.3 cannot be used 00:36:25.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.818 EAL: Requested device 0000:3f:02.4 cannot be used 00:36:25.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.818 EAL: Requested device 0000:3f:02.5 cannot be used 00:36:25.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.818 EAL: Requested device 0000:3f:02.6 cannot be used 00:36:25.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.818 EAL: Requested device 0000:3f:02.7 cannot be used 00:36:25.818 [2024-07-13 22:19:45.185773] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:26.076 [2024-07-13 22:19:45.393261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:26.076 [2024-07-13 22:19:45.393272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:26.076 [2024-07-13 22:19:45.414582] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:36:26.076 [2024-07-13 22:19:45.422598] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:36:26.076 [2024-07-13 22:19:45.430609] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:36:26.335 [2024-07-13 22:19:45.710714] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:36:29.615 [2024-07-13 22:19:48.318435] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:36:29.615 [2024-07-13 22:19:48.318493] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:36:29.615 
[2024-07-13 22:19:48.318510] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:36:29.615 [2024-07-13 22:19:48.326453] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:36:29.615 [2024-07-13 22:19:48.326483] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:36:29.615 [2024-07-13 22:19:48.326495] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:36:29.615 [2024-07-13 22:19:48.334490] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:36:29.615 [2024-07-13 22:19:48.334519] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:36:29.615 [2024-07-13 22:19:48.334530] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:36:29.615 [2024-07-13 22:19:48.342498] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:36:29.615 [2024-07-13 22:19:48.342525] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:36:29.615 [2024-07-13 22:19:48.342535] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:36:29.615 Running I/O for 5 seconds... 
00:36:34.922 00:36:34.922 Latency(us) 00:36:34.922 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:34.922 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:34.922 Verification LBA range: start 0x0 length 0x1000 00:36:34.922 crypto_ram : 5.05 617.63 2.41 0.00 0.00 206385.96 2490.37 137573.17 00:36:34.922 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:34.922 Verification LBA range: start 0x1000 length 0x1000 00:36:34.922 crypto_ram : 5.05 620.49 2.42 0.00 0.00 205555.92 2044.72 137573.17 00:36:34.922 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:34.922 Verification LBA range: start 0x0 length 0x1000 00:36:34.922 crypto_ram1 : 5.05 620.62 2.42 0.00 0.00 205179.09 2136.47 128345.70 00:36:34.922 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:34.922 Verification LBA range: start 0x1000 length 0x1000 00:36:34.922 crypto_ram1 : 5.06 621.95 2.43 0.00 0.00 204744.10 2175.80 128345.70 00:36:34.922 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:34.922 Verification LBA range: start 0x0 length 0x1000 00:36:34.922 crypto_ram2 : 5.03 4857.19 18.97 0.00 0.00 26177.08 6186.60 20971.52 00:36:34.922 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:34.922 Verification LBA range: start 0x1000 length 0x1000 00:36:34.922 crypto_ram2 : 5.04 4863.63 19.00 0.00 0.00 26131.46 2739.40 21076.38 00:36:34.922 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:34.922 Verification LBA range: start 0x0 length 0x1000 00:36:34.922 crypto_ram3 : 5.04 4873.44 19.04 0.00 0.00 26046.27 1848.12 21705.52 00:36:34.922 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:34.922 Verification LBA range: start 0x1000 length 0x1000 00:36:34.922 crypto_ram3 : 5.04 4872.70 19.03 0.00 0.00 26040.79 1461.45 
21705.52 00:36:34.922 =================================================================================================================== 00:36:34.922 Total : 21947.65 85.73 0.00 0.00 46420.25 1461.45 137573.17 00:36:36.292 00:36:36.292 real 0m10.660s 00:36:36.292 user 0m19.900s 00:36:36.292 sys 0m0.440s 00:36:36.292 22:19:55 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:36.292 22:19:55 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:36:36.292 ************************************ 00:36:36.292 END TEST bdev_verify 00:36:36.292 ************************************ 00:36:36.292 22:19:55 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:36:36.292 22:19:55 blockdev_crypto_qat -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:36:36.292 22:19:55 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:36:36.292 22:19:55 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:36.292 22:19:55 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:36:36.550 ************************************ 00:36:36.550 START TEST bdev_verify_big_io 00:36:36.550 ************************************ 00:36:36.550 22:19:55 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:36:36.550 [2024-07-13 22:19:55.775531] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:36:36.550 [2024-07-13 22:19:55.775619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1618064 ]
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3d:01.0 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3d:01.1 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3d:01.2 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3d:01.3 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3d:01.4 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3d:01.5 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3d:01.6 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3d:01.7 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3d:02.0 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3d:02.1 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3d:02.2 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3d:02.3 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3d:02.4 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3d:02.5 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3d:02.6 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3d:02.7 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3f:01.0 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3f:01.1 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3f:01.2 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3f:01.3 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3f:01.4 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3f:01.5 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3f:01.6 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3f:01.7 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3f:02.0 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3f:02.1 cannot be used
00:36:36.550 
qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3f:02.2 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3f:02.3 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3f:02.4 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3f:02.5 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3f:02.6 cannot be used
00:36:36.550 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:36:36.550 EAL: Requested device 0000:3f:02.7 cannot be used
00:36:36.550 [2024-07-13 22:19:55.933753] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2
00:36:36.807 [2024-07-13 22:19:56.138967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:36:36.807 [2024-07-13 22:19:56.138978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:36:36.807 [2024-07-13 22:19:56.160303] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat
00:36:36.807 [2024-07-13 22:19:56.168322] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:36:36.807 [2024-07-13 22:19:56.176336] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:36:37.371 [2024-07-13 22:19:56.466758] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96
00:36:39.893 [2024-07-13 22:19:59.063268] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc"
00:36:39.893 [2024-07-13 22:19:59.063327] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:36:39.893 
[2024-07-13 22:19:59.063344] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:36:39.893 [2024-07-13 22:19:59.071279] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts"
00:36:39.893 [2024-07-13 22:19:59.071311] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:36:39.893 [2024-07-13 22:19:59.071322] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:36:39.893 [2024-07-13 22:19:59.079315] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2"
00:36:39.893 [2024-07-13 22:19:59.079341] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:36:39.893 [2024-07-13 22:19:59.079352] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:36:39.893 [2024-07-13 22:19:59.087326] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2"
00:36:39.893 [2024-07-13 22:19:59.087352] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:36:39.893 [2024-07-13 22:19:59.087363] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:36:40.150 Running I/O for 5 seconds...
00:36:40.718 [2024-07-13 22:19:59.896450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:40.718 [2024-07-13 22:19:59.896998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:40.719 [2024-07-13 22:19:59.897275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:40.719 [2024-07-13 22:19:59.897543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:40.720 [2024-07-13 22:19:59.952183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:40.720 [2024-07-13 22:19:59.952225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.720 [2024-07-13 22:19:59.952488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.952502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.952513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.952529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.954876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.954932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.954967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.955001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.955327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.955366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.955399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.955432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.721 [2024-07-13 22:19:59.955721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.955737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.955749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.955763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.958247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.958301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.958347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.958392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.958760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.958810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.958853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.958885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.959175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.721 [2024-07-13 22:19:59.959192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.959204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.959217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.961536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.961582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.961628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.961661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.962042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.962085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.962118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.962149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.962469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.962486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.721 [2024-07-13 22:19:59.962500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.962514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.964681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.964728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.964761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.964804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.965182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.965221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.965254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.965286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.965623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.965640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.965653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.721 [2024-07-13 22:19:59.965666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.967861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.967913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.967946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.967978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.968335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.968373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.968406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.968437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.968731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.968747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.968759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.968772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.721 [2024-07-13 22:19:59.970893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.970947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.970979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.971013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.971381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.971429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.971463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.971494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.971818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.971835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.971859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.971872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.974102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.721 [2024-07-13 22:19:59.974148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.974181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.974220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.974608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.974645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.974681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.974712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.975048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.975065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.975078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.975093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.977188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.977233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.721 [2024-07-13 22:19:59.977265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.977296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.977659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.977696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.977733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.977764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.978068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.978086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.978099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.978112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.721 [2024-07-13 22:19:59.980272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.980318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.980350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.722 [2024-07-13 22:19:59.980382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.980735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.980787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.980839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.980873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.981177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.981195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.981207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.981220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.983409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.983454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.983487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.983520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.722 [2024-07-13 22:19:59.983848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.983885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.983936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.983968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.984337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.984353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.984365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.984378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.986487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.986539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.986572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.986602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.986963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.722 [2024-07-13 22:19:59.987002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.987046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.987079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.987422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.987440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.987452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.987466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.989571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.989618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.989652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.989685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.990040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.990080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.722 [2024-07-13 22:19:59.990114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.990147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.990491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.990508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.990521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.990537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.992440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.992485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.992516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.992549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.992809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.992844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.992882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.722 [2024-07-13 22:19:59.992927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.993200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.993216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.993228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.993241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.994860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.994915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.994949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.994982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.995340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.995378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.995420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.722 [2024-07-13 22:19:59.995453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.722 [2024-07-13 22:19:59.995796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... previous message repeated ~260 times between 22:19:59.995796 and 22:20:00.182872 ...]
00:36:40.987 [2024-07-13 22:20:00.182872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:40.987 [2024-07-13 22:20:00.183540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.183829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.184130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.184455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.184472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.184484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.184497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.187046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.187795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.188642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.189665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.190583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.190889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.987 [2024-07-13 22:20:00.191209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.191499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.191849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.191867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.191893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.191913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.193883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.194728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.195778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.196893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.197497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.197784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.198072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.987 [2024-07-13 22:20:00.198358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.198604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.198621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.198633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.198646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.200860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.201935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.202982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.203485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.204124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.987 [2024-07-13 22:20:00.204412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.204692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.205474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.988 [2024-07-13 22:20:00.205730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.205747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.205759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.205772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.208273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.209366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.209954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.210242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.210893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.211200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.211933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.212741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.213007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.988 [2024-07-13 22:20:00.213024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.213037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.213050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.215458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.216081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.216368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.216650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.217238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.217823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.218671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.219686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.219935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.219952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.988 [2024-07-13 22:20:00.219965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.219979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.222252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.222544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.222830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.223148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.223942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.224763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.225733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.226561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.226899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.226921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.226950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.988 [2024-07-13 22:20:00.226963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.228748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.229049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.229335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.229618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.230201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.230485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.230764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.231050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.231369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.231387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.231399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.231412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.988 [2024-07-13 22:20:00.233485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.233780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.234077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.234361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.234971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.235256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.235538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.235830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.236151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.236169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.236181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.236194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.238356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.988 [2024-07-13 22:20:00.238656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.238946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.238983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.239584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.239871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.240187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.240480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.240828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.240850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.240863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.240877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.242913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.243222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.988 [2024-07-13 22:20:00.243525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.243811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.243849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.244210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.244520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.244807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.245098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.245383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.245685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.245701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.245713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.245727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.247480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.988 [2024-07-13 22:20:00.247528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.247561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.247594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.247946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.247995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.988 [2024-07-13 22:20:00.248030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.248062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.248095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.248410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.248426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.248441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.248455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.250245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.989 [2024-07-13 22:20:00.250293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.250327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.250378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.250731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.250778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.250813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.250846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.250879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.251197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.251214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.251227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.251239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.253041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.989 [2024-07-13 22:20:00.253097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.253129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.253161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.253490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.253538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.253572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.253605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.253639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.253916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.253934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.253947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.253960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.989 [2024-07-13 22:20:00.255709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.989 [2024-07-13 22:20:00.255756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:40.992 [2024-07-13 22:20:00.312683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.312715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.312747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.312979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.313035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.313076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.313117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.313150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.313371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.313388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.313400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.313413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.315299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.992 [2024-07-13 22:20:00.315347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.315385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.315417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.315651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.315702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.315765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.315798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.315832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.316065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.316082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.316094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.316107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.317457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.992 [2024-07-13 22:20:00.317502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.317533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.317564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.317805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.317856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.317891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.317931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.317970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.318249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.318266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.318279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.318295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.320109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.992 [2024-07-13 22:20:00.320153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.320189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.320221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.320452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.320501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.320535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.320575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.320611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.320829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.320845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.320858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.320870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.322176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.992 [2024-07-13 22:20:00.322230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.322261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.322293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.322569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.322625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.322660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.322694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.322727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.323125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.323143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.323156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.323170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.324849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.992 [2024-07-13 22:20:00.324894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.324933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.324972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.325197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.325246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.325278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.992 [2024-07-13 22:20:00.325310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.325342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.325561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.325576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.325588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.325601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.327302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.993 [2024-07-13 22:20:00.327360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.327639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.327675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.328014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.328064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.328098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.328129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.328161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.328514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.328542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.328555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.328569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.329917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.993 [2024-07-13 22:20:00.329968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.330003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.330723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.330995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.331061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.331095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.331127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.331162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.331382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.331397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.331410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.331423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.333307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.993 [2024-07-13 22:20:00.333788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.334621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.335668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.335905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.336761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.337706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.338543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.339584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.339814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.339829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.339842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.339855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.342181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.993 [2024-07-13 22:20:00.343024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.344082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.345105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.345379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.346303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.347135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.348154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.349175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.349570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.349586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.349599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.349616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.352419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.993 [2024-07-13 22:20:00.353468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.354512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.355336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.355581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.356444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.357488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.358535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.358863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.359227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.359244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.359257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.359272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.361975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.993 [2024-07-13 22:20:00.363033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.363862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.364789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.365043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.366118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.367159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.367536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.367816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.368178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.368196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.368209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.368227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:40.993 [2024-07-13 22:20:00.370943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:40.993 [2024-07-13 22:20:00.371695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.256 [2024-07-13 22:20:00.372879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.256 [2024-07-13 22:20:00.374154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.256 [2024-07-13 22:20:00.374401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.256 [2024-07-13 22:20:00.375562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.256 [2024-07-13 22:20:00.375846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.256 [2024-07-13 22:20:00.376130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.256 [2024-07-13 22:20:00.376409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.256 [2024-07-13 22:20:00.376728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.256 [2024-07-13 22:20:00.376745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.256 [2024-07-13 22:20:00.376762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.256 [2024-07-13 22:20:00.376775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.256 [2024-07-13 22:20:00.378680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.256 [2024-07-13 22:20:00.379826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:41.259 [identical *ERROR* message repeated at timestamps through 2024-07-13 22:20:00.535098]
00:36:41.259 [2024-07-13 22:20:00.535398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.535691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.535979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.536317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.536611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.536892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.537185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.537482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.537839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.537855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.537869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.537883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.539914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.259 [2024-07-13 22:20:00.540215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.540495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.540784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.541131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.541430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.541719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.542009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.542296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.542601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.542618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.542634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.542646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.544634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.259 [2024-07-13 22:20:00.544930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.545211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.545492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.545782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.546088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.546372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.546652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.546938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.547245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.547261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.547274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.547287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.549337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.259 [2024-07-13 22:20:00.549630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.549924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.550209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.550541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.550836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.551124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.551406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.551692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.551986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.552004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.552018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.552031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.554049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.259 [2024-07-13 22:20:00.554348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.554394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.554672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.555001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.555295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.555577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.555869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.556161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.556536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.556553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.556566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.556579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.558616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.259 [2024-07-13 22:20:00.558914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.559195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.559233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.559584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.559875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.560166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.560452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.560748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.561097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.561115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.561129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.561143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.562951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.259 [2024-07-13 22:20:00.562997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.563029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.563061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.563405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.563452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.563485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.563522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.563556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.563869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.563885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.563898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.563919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.565619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.259 [2024-07-13 22:20:00.565665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.565698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.565729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.566068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.566116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.259 [2024-07-13 22:20:00.566150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.566181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.566213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.566531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.566548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.566560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.566574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.568270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.260 [2024-07-13 22:20:00.568316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.568349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.568384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.568717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.568763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.568799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.568834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.568866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.569194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.569212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.569227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.569240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.570976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.260 [2024-07-13 22:20:00.571020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.571052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.571084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.571435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.571481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.571514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.571547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.571580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.571865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.571880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.571894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.571913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.573640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.260 [2024-07-13 22:20:00.573685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.573718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.573750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.574085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.574148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.574191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.574223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.574255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.574511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.574526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.574538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.574551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.576377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.260 [2024-07-13 22:20:00.576423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.576466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.576503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.576851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.576911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.576963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.577007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.577052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.577371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.577388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.577400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.577413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.579194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.260 [2024-07-13 22:20:00.579238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.579304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.579354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.579664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.579719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.579755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.579788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.579819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.580145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.580163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.580176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.580189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.260 [2024-07-13 22:20:00.581921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.260 [2024-07-13 22:20:00.581967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.263 [2024-07-13 22:20:00.633014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.633045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.633092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.633419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.633478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.633512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.633545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.633577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.633897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.633926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.633938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.633951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.635509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.263 [2024-07-13 22:20:00.635553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.635585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.635617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.635839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.635888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.635928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.635961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.635998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.636264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.636280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.636292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.636305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.637686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.263 [2024-07-13 22:20:00.637749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.638050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.638088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.638399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.638446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.638487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.638519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.638553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.638890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.263 [2024-07-13 22:20:00.638913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.264 [2024-07-13 22:20:00.638926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.264 [2024-07-13 22:20:00.638941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.264 [2024-07-13 22:20:00.640294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.264 [2024-07-13 22:20:00.640349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.264 [2024-07-13 22:20:00.640383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.525 [2024-07-13 22:20:00.641584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.525 [2024-07-13 22:20:00.641838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.525 [2024-07-13 22:20:00.641897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.525 [2024-07-13 22:20:00.641943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.525 [2024-07-13 22:20:00.641976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.642009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.642245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.642261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.642273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.642286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.644315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.526 [2024-07-13 22:20:00.645484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.646545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.647676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.647918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.648387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.649216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.650238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.651267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.651518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.651534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.651547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.651560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.654309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.526 [2024-07-13 22:20:00.655241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.656316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.657465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.657854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.658748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.659773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.660789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.661606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.661912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.661929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.661941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.661953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.664501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.526 [2024-07-13 22:20:00.665523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.666544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.666917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.667161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.668339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.669381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.670331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.670621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.670968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.670985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.670999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.671013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.673560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.526 [2024-07-13 22:20:00.674616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.674983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.675961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.676194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.677393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.678461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.678743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.679030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.679351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.679370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.679383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.679395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.681935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.526 [2024-07-13 22:20:00.682427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.683515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.684660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.684897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.686065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.686353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.686631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.686915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.687253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.687270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.687286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.687299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.689354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.526 [2024-07-13 22:20:00.690404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.691341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.692436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.692673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.692998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.693284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.693565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.693848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.694186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.694203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.694216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.694230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.696364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.526 [2024-07-13 22:20:00.697211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.698272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.699289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.699583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.699879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.700165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.700455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.700733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.700972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.700990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.701002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.701014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.703146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.526 [2024-07-13 22:20:00.704205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.705258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.526 [2024-07-13 22:20:00.705595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.527 [2024-07-13 22:20:00.705968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.527 [2024-07-13 22:20:00.706262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.527 [2024-07-13 22:20:00.706544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.527 [2024-07-13 22:20:00.706824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.527 [2024-07-13 22:20:00.707735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.527 [2024-07-13 22:20:00.707995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.527 [2024-07-13 22:20:00.708012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.527 [2024-07-13 22:20:00.708024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.527 [2024-07-13 22:20:00.708037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.527 [2024-07-13 22:20:00.710424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.527 [2024-07-13 22:20:00.711470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.527 [2024-07-13 22:20:00.711821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.527 [2024-07-13 22:20:00.712105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.527 [2024-07-13 22:20:00.712454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.527 [2024-07-13 22:20:00.712749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.527 [2024-07-13 22:20:00.713041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.527 [2024-07-13 22:20:00.714128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.527 [2024-07-13 22:20:00.715085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.527 [2024-07-13 22:20:00.715318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.527 [2024-07-13 22:20:00.715334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.527 [2024-07-13 22:20:00.715346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.527 [2024-07-13 22:20:00.715359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.527 [2024-07-13 22:20:00.717733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.527 [2024-07-13 22:20:00.718230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.529 [last message repeated from 22:20:00.718230 through 22:20:00.854955] 
00:36:41.529 [2024-07-13 22:20:00.855251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.855289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.855570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.855919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.856218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.856505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.856795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.857080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.857423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.857440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.857453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.857466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.859506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.530 [2024-07-13 22:20:00.859799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.860089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.860129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.860457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.860753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.861051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.861336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.861618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.861961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.861978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.861993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.862006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.863806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.530 [2024-07-13 22:20:00.863853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.863886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.863925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.864211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.864265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.864298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.864331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.864362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.864690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.864707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.864720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.864737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.866430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.530 [2024-07-13 22:20:00.866477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.866517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.866561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.866928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.866975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.867010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.867042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.867075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.867406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.867421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.867433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.867446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.869200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.530 [2024-07-13 22:20:00.869245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.869278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.869310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.869643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.869692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.869728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.869792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.869825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.870107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.870123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.870135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.870148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.871943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.530 [2024-07-13 22:20:00.871989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.872024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.872058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.872349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.872404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.872438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.872471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.872503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.872781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.872798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.872811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.872823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.874758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.530 [2024-07-13 22:20:00.874815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.874846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.874883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.530 [2024-07-13 22:20:00.875113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.875172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.875214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.875245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.875279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.875543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.875559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.875571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.875584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.877268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.531 [2024-07-13 22:20:00.877314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.877347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.877379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.877710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.877766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.877800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.877843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.877892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.878238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.878254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.878267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.878280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.880079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.531 [2024-07-13 22:20:00.880128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.880162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.880195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.880417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.880464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.880503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.880535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.880572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.880793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.880808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.880820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.880833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.882139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.531 [2024-07-13 22:20:00.882204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.882241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.882273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.882494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.882543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.882577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.882608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.882641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.882953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.882971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.882983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.882996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.884880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.531 [2024-07-13 22:20:00.884931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.884964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.884995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.885251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.885304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.885338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.885370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.885401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.885623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.885638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.885650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.885663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.887013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.531 [2024-07-13 22:20:00.887058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.887096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.887133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.887357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.887409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.887443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.887475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.887508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.887813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.887829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.887841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.887854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.889561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.531 [2024-07-13 22:20:00.889606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.889643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.889674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.889895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.889952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.889987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.890020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.890051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.890271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.890286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.890299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.890311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.531 [2024-07-13 22:20:00.891644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.531 [2024-07-13 22:20:00.891698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [last message repeated; identical *ERROR* entries from 22:20:00.891733 through 22:20:00.942876 omitted] 
00:36:41.795 [2024-07-13 22:20:00.943174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.944240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.945473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.945705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.946876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.947466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.948307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.949349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.949578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.949594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.949607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.949624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.951653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.795 [2024-07-13 22:20:00.952758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.953947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.955045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.955276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.955859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.956708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.957755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.958790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.959140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.959157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.959170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.959182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.962207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.795 [2024-07-13 22:20:00.963355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.964405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.965325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.965580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.966433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.967475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.968522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.969048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.969421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.969440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.969453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.969466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.972280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.795 [2024-07-13 22:20:00.973346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.974317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.975181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.975468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.976512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.977594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.978108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.978394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.978730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.978747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.978759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.978774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.981597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.795 [2024-07-13 22:20:00.982629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.983434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.984283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.984512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.985584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.986121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.986416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.986699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.987019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.987036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.987048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.987060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.989630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.795 [2024-07-13 22:20:00.990308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.991149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.992181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.992434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.993183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.993468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.993748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.994041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.994382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.994398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.994411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.994425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.996200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.795 [2024-07-13 22:20:00.997063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.998102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.999123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.999380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.999678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:00.999967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:01.000250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:01.000531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:01.000818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:01.000833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:01.000845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:01.000858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:01.003081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.795 [2024-07-13 22:20:01.004152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:01.005194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:01.005819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:01.006141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:01.006439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.795 [2024-07-13 22:20:01.006722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.007010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.007693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.007972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.007990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.008003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.008015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.010463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.796 [2024-07-13 22:20:01.011520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.012162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.012446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.012777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.013081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.013363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.014016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.014841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.015075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.015092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.015104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.015116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.017499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.796 [2024-07-13 22:20:01.018219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.018503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.018786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.019089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.019387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.020014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.020838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.021878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.022115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.022132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.022144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.022157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.024275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.796 [2024-07-13 22:20:01.024569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.024851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.025141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.025492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.026118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.026940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.027979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.029032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.029348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.029365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.029378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.029390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.031004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.796 [2024-07-13 22:20:01.031301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.031582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.031863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.032157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.032998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.034023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.035060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.035708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.035944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.035962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.035974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.035986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.037707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.796 [2024-07-13 22:20:01.038008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.038291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.038957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.039238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.040292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.041333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.042016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.043093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.043350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.043366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.043378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.043390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.796 [2024-07-13 22:20:01.045164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.796 [2024-07-13 22:20:01.045454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:41.799 [last message repeated ~270 times between 2024-07-13 22:20:01.045454 and 2024-07-13 22:20:01.163958]
00:36:41.799 [2024-07-13 22:20:01.164020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.164053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.164086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.164396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.164451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.164485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.164517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.164552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.164874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.164891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.164911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.164924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.166500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.799 [2024-07-13 22:20:01.166556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.166588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.166620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.166841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.166890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.166930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.166962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.166993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.167300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.167316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.167327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.167340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.168737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.799 [2024-07-13 22:20:01.168793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.168825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.168859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.169203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.169253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.169287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.169320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.799 [2024-07-13 22:20:01.169352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.169633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.169649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.169661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.169674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.171193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.800 [2024-07-13 22:20:01.171247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.171279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.171311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.171553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.171601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.171635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.171667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.171703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.171930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.171948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.171960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.171973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.173442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.800 [2024-07-13 22:20:01.173486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.173529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.173561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.173880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.173943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.173977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.174013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.174043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.174361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.174378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.174393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.174406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.175762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.800 [2024-07-13 22:20:01.175816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.175852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.175885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.176159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.176218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.176251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.176283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.176320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.176559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.176575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.176587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.176599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.178407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:41.800 [2024-07-13 22:20:01.178471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.178506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.178539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.178926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.178992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.179030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.179063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.179096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.179349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.179366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.179380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:41.800 [2024-07-13 22:20:01.179400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.180933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.062 [2024-07-13 22:20:01.180987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.181020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.181053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.181281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.181333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.181368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.181400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.181443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.181662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.181678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.181691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.181703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.183595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.062 [2024-07-13 22:20:01.183643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.183677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.183709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.183959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.184008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.184041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.184072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.184109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.184329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.184345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.184358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.184371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.185715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.062 [2024-07-13 22:20:01.185766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.185806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.185842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.186071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.186122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.186155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.186187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.186219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.186496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.186512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.186524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.186538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.188477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.062 [2024-07-13 22:20:01.188522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.188555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.188586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.188825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.188878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.188917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.188949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.188983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.189201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.189216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.189228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.189240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.190636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.062 [2024-07-13 22:20:01.190681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.190729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.190761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.190992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.191041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.191074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.191108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.191143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.191472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.191488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.191501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.191515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.193322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.062 [2024-07-13 22:20:01.193373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.193406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.193438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.193665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.193715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.062 [2024-07-13 22:20:01.193750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.063 [2024-07-13 22:20:01.193784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.063 [2024-07-13 22:20:01.193816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.063 [2024-07-13 22:20:01.194060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.063 [2024-07-13 22:20:01.194078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.063 [2024-07-13 22:20:01.194090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.063 [2024-07-13 22:20:01.194103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.063 [2024-07-13 22:20:01.195454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.063 [2024-07-13 22:20:01.195505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.063 [2024-07-13 22:20:01.195541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.063 [2024-07-13 22:20:01.195573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.063 [2024-07-13 22:20:01.195868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.063 [2024-07-13 22:20:01.195927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.063 [2024-07-13 22:20:01.195961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.063 [2024-07-13 22:20:01.195994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.063 [2024-07-13 22:20:01.196026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.063 [2024-07-13 22:20:01.196350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.063 [2024-07-13 22:20:01.196367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.063 [2024-07-13 22:20:01.196380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.063 [2024-07-13 22:20:01.196397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.063 [2024-07-13 22:20:01.199135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.065 [2024-07-13 22:20:01.309033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.065 [2024-07-13 22:20:01.309311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.065 [2024-07-13 22:20:01.309557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.065 [2024-07-13 22:20:01.309572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.065 [2024-07-13 22:20:01.309584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.065 [2024-07-13 22:20:01.311776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.065 [2024-07-13 22:20:01.312810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.065 [2024-07-13 22:20:01.313835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.065 [2024-07-13 22:20:01.314308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.065 [2024-07-13 22:20:01.314656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.065 [2024-07-13 22:20:01.314674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.065 [2024-07-13 22:20:01.314971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.065 [2024-07-13 22:20:01.315253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.065 [2024-07-13 22:20:01.315535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.065 [2024-07-13 22:20:01.316504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.065 [2024-07-13 22:20:01.316746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.065 [2024-07-13 22:20:01.316763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.065 [2024-07-13 22:20:01.316776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.319135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.320174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.320562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.320842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.321196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.321215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.321506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.321786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.066 [2024-07-13 22:20:01.322780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.323669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.323898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.323919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.323931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.326338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.326716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.327003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.327282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.327608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.327626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.327924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.328906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.066 [2024-07-13 22:20:01.329806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.330831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.331066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.331083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.331095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.333003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.333295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.333574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.333864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.334225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.334244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.335212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.336048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.066 [2024-07-13 22:20:01.337082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.338116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.338407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.338423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.338436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.340080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.340370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.340660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.340945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.341175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.341191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.342025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.342978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.066 [2024-07-13 22:20:01.343425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.344495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.344724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.344739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.344751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.347165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.347587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.347867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.348777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.349030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.349051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.350127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.351169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.066 [2024-07-13 22:20:01.351522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.352437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.352666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.352683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.352697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.354688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.354985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.356065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.357038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.357268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.357285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.358360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.358650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.066 [2024-07-13 22:20:01.359695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.360842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.361075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.361092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.361104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.363068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.363362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.066 [2024-07-13 22:20:01.363650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.363940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.364269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.364287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.364578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.364859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.067 [2024-07-13 22:20:01.365155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.365446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.365747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.365765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.365778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.367860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.368164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.368445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.368723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.369072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.369089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.369383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.369674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.067 [2024-07-13 22:20:01.369979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.370262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.370592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.370609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.370622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.372658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.372973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.373255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.373537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.373858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.373874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.374174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.374464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.067 [2024-07-13 22:20:01.374742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.375034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.375397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.375417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.375430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.377495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.377789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.378081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.378367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.378690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.378706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.379004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.379284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.067 [2024-07-13 22:20:01.379563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.379843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.380165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.380183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.380195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.382273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.382567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.382849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.383139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.383446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.383462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.383749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.384041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.067 [2024-07-13 22:20:01.384326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.384619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.384984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.385003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.385015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.387038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.387334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.387613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.387892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.388236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.388259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.388550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.067 [2024-07-13 22:20:01.388834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.067 [2024-07-13 22:20:01.389125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:42.341 [2024-07-13 22:20:01.489191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[identical "Failed to get src_mbufs!" errors repeated continuously between the two timestamps above; duplicate entries omitted]
00:36:42.341 [2024-07-13 22:20:01.489224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.489255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.489525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.489541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.489553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.490893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.490948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.490981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.491012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.491239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.491253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.491303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.491336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.341 [2024-07-13 22:20:01.491367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.491398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.491623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.491638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.491650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.493497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.493544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.493576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.493609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.493840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.493855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.493908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.493948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.341 [2024-07-13 22:20:01.493980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.494017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.494243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.494259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.494275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.495603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.495654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.495691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.495722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.495954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.495968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.496016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.496048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.341 [2024-07-13 22:20:01.496080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.496112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.496392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.496407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.496419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.498346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.498390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.498425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.498457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.498695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.498710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.498756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.498788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.341 [2024-07-13 22:20:01.498819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.498851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.499081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.499096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.499108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.500458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.500502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.500534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.500566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.500791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.500806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.500860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.500895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.341 [2024-07-13 22:20:01.500947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.500979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.341 [2024-07-13 22:20:01.501285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.501300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.501314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.503019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.503069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.503104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.503138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.503360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.503375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.503422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.503454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.342 [2024-07-13 22:20:01.503486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.503517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.503736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.503750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.503761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.505126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.505170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.505202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.505233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.505541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.505557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.505610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.505644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.342 [2024-07-13 22:20:01.505680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.505711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.506052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.506070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.506083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.507722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.507767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.507801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.507832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.508064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.508079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.508126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.508167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.342 [2024-07-13 22:20:01.508200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.508231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.508455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.508470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.508481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.509737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.509782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.509814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.509847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.510169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.510186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.510233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.510266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.342 [2024-07-13 22:20:01.510300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.510331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.510668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.510684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.510696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.512269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.512314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.512345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.512376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.512599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.512614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.512661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.512695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.342 [2024-07-13 22:20:01.512733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.512767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.513078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.513094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.513107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.514455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.514501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.514534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.514565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.514890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.514915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.514957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.514993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.342 [2024-07-13 22:20:01.515027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.515058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.515363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.515378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.515390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.516793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.516852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.516888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.516927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.517184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.517199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.517244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.517278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.342 [2024-07-13 22:20:01.517310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.517346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.517567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.517583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.517596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.519028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.519073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.519123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.519157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.519505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.519522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.342 [2024-07-13 22:20:01.519574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.343 [2024-07-13 22:20:01.519609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.343 [2024-07-13 22:20:01.519642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.345 [... identical "Failed to get src_mbufs!" error repeated through 2024-07-13 22:20:01.616454 ...] 
00:36:42.345 [2024-07-13 22:20:01.616734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.345 [2024-07-13 22:20:01.617028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.345 [2024-07-13 22:20:01.617359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.345 [2024-07-13 22:20:01.617375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.345 [2024-07-13 22:20:01.617388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.345 [2024-07-13 22:20:01.619438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.345 [2024-07-13 22:20:01.619730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.345 [2024-07-13 22:20:01.620028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.620308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.620654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.620670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.620971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.621257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.346 [2024-07-13 22:20:01.621541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.621822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.622157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.622178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.622190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.624239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.624532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.624813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.625104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.625399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.625415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.625709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.625998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.346 [2024-07-13 22:20:01.626277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.626559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.626896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.626921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.626935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.629045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.629336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.629621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.629914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.630252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.630268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.630560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.630839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.346 [2024-07-13 22:20:01.631125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.631407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.631698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.631715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.631728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.633806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.634111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.634401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.634681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.634999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.635017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.635306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.635589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.346 [2024-07-13 22:20:01.635875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.636165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.636509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.636526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.636540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.638532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.638823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.639111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.639390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.639702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.639718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.640031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.640325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.346 [2024-07-13 22:20:01.640607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.640885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.641267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.641285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.641300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.643294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.643589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.643873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.644163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.644526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.644544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.644835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.645149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.346 [2024-07-13 22:20:01.645440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.645743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.646041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.646058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.646082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.648095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.648388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.648673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.648962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.649311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.649328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.649616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.649896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.346 [2024-07-13 22:20:01.650189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.650459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.650738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.650754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.650765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.652802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.653109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.653389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.653669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.654023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.654043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.654336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.654629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.346 [2024-07-13 22:20:01.654924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.655206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.346 [2024-07-13 22:20:01.655548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.655564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.655581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.657556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.657846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.658135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.658417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.658710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.658727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.659028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.659315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.347 [2024-07-13 22:20:01.659594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.659873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.660232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.660249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.660264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.662303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.662596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.662879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.663170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.663488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.663504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.663795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.664083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.347 [2024-07-13 22:20:01.664361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.664647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.664947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.664964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.664976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.667037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.667336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.667619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.667908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.668220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.668236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.668524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.668805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.347 [2024-07-13 22:20:01.669101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.669388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.669748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.669765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.669777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.671859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.672159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.672440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.672718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.673067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.673085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.673941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.674326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.347 [2024-07-13 22:20:01.675298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.675585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.675946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.675964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.675977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.678017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.678309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.678588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.678866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.679216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.679234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.679526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.347 [2024-07-13 22:20:01.679812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.347 [2024-07-13 22:20:01.680117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.611 [... identical "Failed to get src_mbufs!" errors from accel_dpdk_cryptodev.c:468 repeated continuously from 22:20:01.680396 through 22:20:01.774728 (build clock 00:36:42.347 to 00:36:42.611); duplicate lines omitted ...] 
00:36:42.611 [2024-07-13 22:20:01.774766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.611 [2024-07-13 22:20:01.774803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.611 [2024-07-13 22:20:01.775037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.775054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.775067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.776393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.776438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.776470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.776507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.776727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.776741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.776806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.776841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.612 [2024-07-13 22:20:01.776872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.776912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.777155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.777172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.777184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.779047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.779095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.779130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.779162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.779406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.779420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.779469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.779502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.612 [2024-07-13 22:20:01.779534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.779566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.779787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.779803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.779815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.781146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.781195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.781227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.781259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.781480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.781495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.781542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.781575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.612 [2024-07-13 22:20:01.781606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.781645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.781956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.781972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.781985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.783573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.783619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.783657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.783690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.784027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.784045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.784086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.784123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.612 [2024-07-13 22:20:01.784156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.784188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.784457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.784473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.784485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.785780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.785827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.785865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.785897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.786129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.786144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.786198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.786232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.612 [2024-07-13 22:20:01.786263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.786294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.786513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.786527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.786539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.788310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.788355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.788387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.788420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.788676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.788691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.788735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.788777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.612 [2024-07-13 22:20:01.788809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.788847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.789077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.789093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.789105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.790399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.790444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.790476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.790510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.790731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.790745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.790793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.790829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.612 [2024-07-13 22:20:01.790861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.790894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.791177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.791197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.791209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.792724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.792774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.792810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.792841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.793182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.793199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.612 [2024-07-13 22:20:01.793241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.793275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.613 [2024-07-13 22:20:01.793308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.793340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.793587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.793603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.793615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.794963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.795006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.795038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.795071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.795318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.795333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.795384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.795418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.613 [2024-07-13 22:20:01.795450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.795488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.795709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.795725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.795737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.797445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.797491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.797526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.797558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.797887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.797911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.797953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.797991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.613 [2024-07-13 22:20:01.798022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.798053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.798306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.798322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.798334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.799639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.799688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.799723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.799755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.799988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.800002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.800049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.800082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.613 [2024-07-13 22:20:01.800114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.800145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.800366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.800381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.800392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.802016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.802060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.802095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.802127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.802413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.802429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.802474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.802512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.613 [2024-07-13 22:20:01.802545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.802578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.802915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.802932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.802945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.804210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.804256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.804289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.804320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.804544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.804560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.804612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.613 [2024-07-13 22:20:01.804648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.613 [2024-07-13 22:20:01.804683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:42.616 [... same "Failed to get src_mbufs!" error from accel_dpdk_cryptodev.c:468 repeated for timestamps 22:20:01.804715 through 22:20:01.925367; duplicate lines omitted ...]
00:36:42.616 [2024-07-13 22:20:01.925645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.926828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.927204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.927221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.927234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.929988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.930287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.930583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.931571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.931940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.931960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.932250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.933338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.616 [2024-07-13 22:20:01.933619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.933899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.934185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.934201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.934213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.936235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.936550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.936834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.937985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.938346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.938363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.938656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.939768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.616 [2024-07-13 22:20:01.940062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.940346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.940634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.940650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.940663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.943292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.944205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.944544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.944824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.945094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.945111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.945642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.945937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.616 [2024-07-13 22:20:01.946233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.616 [2024-07-13 22:20:01.946517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.946809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.946825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.946837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.948885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.949510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.950150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.950431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.950720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.950741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.951515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.951795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.617 [2024-07-13 22:20:01.952085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.952372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.952679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.952695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.952708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.955973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.956265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.956567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.957524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.957875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.957893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.958194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.958481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.617 [2024-07-13 22:20:01.958764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.959052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.959391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.959407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.959419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.962387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.962691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.962979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.964163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.964507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.964525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.964815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.965106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.617 [2024-07-13 22:20:01.965390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.965677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.966023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.966041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.966054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.968447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.969242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.969707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.969993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.970275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.970292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.971105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.971664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.617 [2024-07-13 22:20:01.972360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.972642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.972982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.972999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.973013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.975415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.975909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.976191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.976908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.977177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.977193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.977487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.977770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.617 [2024-07-13 22:20:01.978064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.978560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.978792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.978809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.978820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.981444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.982130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.982411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.982967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.983202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.983219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.983515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.984026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.617 [2024-07-13 22:20:01.984838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.985862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.986095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.986111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.986124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.988500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.988987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.989753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.990038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.990360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.990377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.991333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.991614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.617 [2024-07-13 22:20:01.992006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.992893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.993145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.993164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.617 [2024-07-13 22:20:01.993176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:01.997308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:01.997974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:01.998257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:01.998871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:01.999131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:01.999148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:01.999446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:01.999982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.879 [2024-07-13 22:20:02.000809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:02.001848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:02.002084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:02.002101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:02.002124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:02.004472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:02.005151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:02.005727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:02.006010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:02.006297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:02.006314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:02.007097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:02.007378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.879 [2024-07-13 22:20:02.007936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:02.008776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:02.009017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:02.009035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:02.009048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:02.013106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:02.013746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:02.014054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:02.014655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:02.014917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:02.014935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:02.015229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.879 [2024-07-13 22:20:02.015760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.879 [2024-07-13 22:20:02.016584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 (previous message repeated continuously from 22:20:02.016584 through 22:20:02.107583)
00:36:42.882 [2024-07-13 22:20:02.107615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.107646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.107909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.107926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.107942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.111366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.111410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.111442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.111473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.111783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.111799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.111848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.111882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.882 [2024-07-13 22:20:02.111922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.111954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.112272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.112288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.112300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.113570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.113616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.113647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.113683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.113909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.113926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.113973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.114010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.882 [2024-07-13 22:20:02.114042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.114073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.114293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.114308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.114320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.117152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.882 [2024-07-13 22:20:02.117203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.117235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.117266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.117599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.117616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.117659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.117693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.883 [2024-07-13 22:20:02.117725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.117757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.118008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.118024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.118036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.119390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.119435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.119471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.119502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.119748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.119762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.119809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.119842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.883 [2024-07-13 22:20:02.119873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.119930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.120152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.120167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.120180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.122871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.122923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.122956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.122988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.123330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.123347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.123393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.123430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.883 [2024-07-13 22:20:02.123465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.123497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.123743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.123758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.123770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.125090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.125139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.125171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.125203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.125425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.125440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.125486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.125519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.883 [2024-07-13 22:20:02.125549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.125580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.125800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.125814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.125826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.128149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.128205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.128238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.128270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.128503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.128518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.128572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.128604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.883 [2024-07-13 22:20:02.128641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.128673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.128898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.128921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.128933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.130225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.130268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.130315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.130350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.130573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.130588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.130635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.130668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.883 [2024-07-13 22:20:02.130699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.130731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.131009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.131026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.131038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.135226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.135270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.135301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.135332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.883 [2024-07-13 22:20:02.135577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.135592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.135643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.135676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.884 [2024-07-13 22:20:02.135707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.135738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.135969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.135985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.135997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.137336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.137380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.137412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.137447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.137671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.137691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.137737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.137771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.884 [2024-07-13 22:20:02.137803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.137835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.138069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.138086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.138098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.140321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.140372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.140421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.140455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.140678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.140693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.140740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.140773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.884 [2024-07-13 22:20:02.140804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.140835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.141064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.141081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.141093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.142436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.142479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.142514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.142546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.142836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.142852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.142917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.142954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.884 [2024-07-13 22:20:02.142986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.143021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.143246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.143262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.143274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.145296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.145341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.145372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.145403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.145624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.145639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.145690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:42.884 [2024-07-13 22:20:02.145729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:42.884 [2024-07-13 22:20:02.145764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... same "Failed to get src_mbufs!" error repeated continuously from 22:20:02.145764 through 22:20:02.297616 ...]
00:36:43.149 [2024-07-13 22:20:02.297907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.298206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.298439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.298456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.298469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.300909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.302093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.302383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.302660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.302898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.302920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.303256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.303536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.149 [2024-07-13 22:20:02.303817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.304130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.304437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.304452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.304465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.308201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.308651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.309443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.310232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.310582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.310600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.311036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.311856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.149 [2024-07-13 22:20:02.312143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.312434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.312729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.312745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.312757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.315311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.315608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.315913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.317039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.317400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.317417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.317706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.318738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.149 [2024-07-13 22:20:02.319031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.319311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.319583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.319600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.319613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.321921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.322215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.322499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.323415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.323728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.323744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.324045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.324793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.149 [2024-07-13 22:20:02.325310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.325592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.325863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.325878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.325890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.329337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.330185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.330996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.331436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.331779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.331797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.332435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.333060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.149 [2024-07-13 22:20:02.333339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.334035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.334300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.334316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.334329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.338608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.339343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.339862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.340146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.340424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.340440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.341160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.149 [2024-07-13 22:20:02.341441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.149 [2024-07-13 22:20:02.342013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.342833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.343073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.343090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.343106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.347187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.347855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.348142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.348680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.348920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.348937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.349229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.349669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.150 [2024-07-13 22:20:02.350486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.351508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.351740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.351756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.351769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.355757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.356053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.356388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.357317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.357662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.357679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.357978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.358971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.150 [2024-07-13 22:20:02.360088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.361131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.361362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.361378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.361392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.364866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.365163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.366247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.366518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.366850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.366867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.367959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.368985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.150 [2024-07-13 22:20:02.370115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.371291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.371571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.371587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.371599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.374542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.375660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.375956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.376237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.376473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.376490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.377380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.378395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.150 [2024-07-13 22:20:02.379423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.379851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.380125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.380141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.380153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.384554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.384891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.385177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.386140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.386392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.386408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.387440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.388454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.150 [2024-07-13 22:20:02.388822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.389803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.390040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.390057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.390070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.393593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.393884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.394670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.395486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.395721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.395738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.396778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.397305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.150 [2024-07-13 22:20:02.398443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.399483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.399718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.399745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.399757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.402877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.402933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.403721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.404531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.404763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.404779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.405818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.150 [2024-07-13 22:20:02.406362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.150 [2024-07-13 22:20:02.407508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.153 [2024-07-13 22:20:02.492661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.153 [2024-07-13 22:20:02.492694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.153 [2024-07-13 22:20:02.492724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.153 [2024-07-13 22:20:02.492955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.153 [2024-07-13 22:20:02.492972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.153 [2024-07-13 22:20:02.492983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.153 [2024-07-13 22:20:02.496261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.153 [2024-07-13 22:20:02.496306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.153 [2024-07-13 22:20:02.496341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.153 [2024-07-13 22:20:02.496373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.153 [2024-07-13 22:20:02.496643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.153 [2024-07-13 22:20:02.496657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.153 [2024-07-13 22:20:02.496710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.153 [2024-07-13 22:20:02.496743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.153 [2024-07-13 22:20:02.496775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.496807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.497036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.497052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.497065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.500766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.500812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.500844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.500877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.501207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.501224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.501267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.501302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.154 [2024-07-13 22:20:02.501335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.501366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.501682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.501697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.501709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.504427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.504471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.504502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.504534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.504819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.504833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.504884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.504925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.154 [2024-07-13 22:20:02.504963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.505000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.505223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.505238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.505250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.508063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.508116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.508152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.508189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.508416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.508431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.508478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.508512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.154 [2024-07-13 22:20:02.508543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.508575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.508797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.508813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.508824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.511754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.511801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.511837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.511872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.512204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.512221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.512277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.512312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.154 [2024-07-13 22:20:02.512345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.512378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.512713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.512733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.512746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.515325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.515382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.515417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.515448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.515675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.515689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.515736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.515771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.154 [2024-07-13 22:20:02.515802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.515847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.516201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.516219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.516232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.519209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.519254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.519286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.519317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.519594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.519609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.519656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.519691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.154 [2024-07-13 22:20:02.519723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.519755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.520004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.520020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.520033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.522977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.523023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.523063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.523098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.523403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.523428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.523473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.523506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.154 [2024-07-13 22:20:02.523538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.523569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.523812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.523827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.523839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.527057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.527102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.527132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.154 [2024-07-13 22:20:02.527162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.155 [2024-07-13 22:20:02.527487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.155 [2024-07-13 22:20:02.527502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.155 [2024-07-13 22:20:02.527556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.155 [2024-07-13 22:20:02.527588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.155 [2024-07-13 22:20:02.527619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.155 [2024-07-13 22:20:02.527648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.155 [2024-07-13 22:20:02.527996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.155 [2024-07-13 22:20:02.528014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.155 [2024-07-13 22:20:02.528027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.155 [2024-07-13 22:20:02.530645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.155 [2024-07-13 22:20:02.530706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.155 [2024-07-13 22:20:02.530758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.155 [2024-07-13 22:20:02.530801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.155 [2024-07-13 22:20:02.531085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.155 [2024-07-13 22:20:02.531102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.155 [2024-07-13 22:20:02.531150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.155 [2024-07-13 22:20:02.531187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.155 [2024-07-13 22:20:02.531227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.155 [2024-07-13 22:20:02.531259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.155 [2024-07-13 22:20:02.531487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.155 [2024-07-13 22:20:02.531503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.155 [2024-07-13 22:20:02.531516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.534867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.535728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.535771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.535804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.536036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.536053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.536102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.536136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.418 [2024-07-13 22:20:02.536169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.536201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.536421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.536437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.536449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.539215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.539263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.539296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.539572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.539898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.539920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.539965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.540000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.418 [2024-07-13 22:20:02.540278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.540313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.540597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.540613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.540629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.543309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.543622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.543908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.544188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.544515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.544532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.544822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.418 [2024-07-13 22:20:02.544864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.418 [2024-07-13 22:20:02.545153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:43.421 [2024-07-13 22:20:02.684164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.685198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.685429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.685446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.685458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.687520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.688523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.689650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.690695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.690921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.690955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.691598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.692421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.421 [2024-07-13 22:20:02.693431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.694440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.694691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.694708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.694720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.697727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.698787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.699879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.701043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.701352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.701371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.702194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.703180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.421 [2024-07-13 22:20:02.704209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.704996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.705324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.705340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.705352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.707911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.708945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.709995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.710452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.710728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.710745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.711784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.712811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.421 [2024-07-13 22:20:02.713689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.421 [2024-07-13 22:20:02.713973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.714319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.714337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.714361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.716935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.717978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.718412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.719234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.719470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.719486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.720510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.721435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.422 [2024-07-13 22:20:02.721716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.722001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.722317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.722333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.722346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.724851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.725303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.726344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.727496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.727732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.727749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.728882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.729167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.422 [2024-07-13 22:20:02.729445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.729723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.730080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.730099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.730112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.732102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.733250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.734397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.735532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.735767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.735783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.736082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.736361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.422 [2024-07-13 22:20:02.736640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.736928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.737244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.737260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.737272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.739745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.740909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.742014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.743012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.743321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.743337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.743625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.743911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.422 [2024-07-13 22:20:02.744189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.744631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.744885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.744907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.744920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.747290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.748518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.749568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.749850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.750181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.750208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.750500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.750779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.422 [2024-07-13 22:20:02.751215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.752032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.752263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.752278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.752291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.754758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.754808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.755646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.755932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.756278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.756295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.756587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.756866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.422 [2024-07-13 22:20:02.757427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.758241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.758471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.758487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.422 [2024-07-13 22:20:02.758500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.760945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.761949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.762230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.762267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.762609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.762626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.762923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.763204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.423 [2024-07-13 22:20:02.763241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.763972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.764219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.764235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.764247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.765607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.765651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.765683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.765715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.765948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.765964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.766012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.767026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.423 [2024-07-13 22:20:02.767064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.767101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.767373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.767393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.767406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.769153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.769197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.769228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.769267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.769491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.769506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.769550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.769585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.423 [2024-07-13 22:20:02.769627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.769658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.769880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.769896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.769915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.771235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.771280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.771313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.771344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.771623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.771639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.771686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.423 [2024-07-13 22:20:02.771730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.423 [2024-07-13 22:20:02.771763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:43.688 [2024-07-13 22:20:02.823448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.823480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.823773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.823789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.823802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.825583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.825628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.825678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.825718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.826024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.826041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.826099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.826135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.688 [2024-07-13 22:20:02.826167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.826198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.826520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.826537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.826549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.828284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.828329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.828361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.828392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.828721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.828737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.828780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.828814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.688 [2024-07-13 22:20:02.828847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.828881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.829195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.829212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.829224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.831027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.831071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.831103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.831134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.831460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.831477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.831518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.831552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.688 [2024-07-13 22:20:02.831585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.831631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.831995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.832013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.832026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.833779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.833823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.833860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.833893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.834216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.834232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.834271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.834303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.688 [2024-07-13 22:20:02.834334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.834364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.834684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.834700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.834714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.836465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.836521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.836554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.836586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.836940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.836957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.688 [2024-07-13 22:20:02.837002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.837038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.689 [2024-07-13 22:20:02.837071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.837102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.837393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.837409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.837422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.839205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.839251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.839283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.839315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.839644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.839660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.839703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.839746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.689 [2024-07-13 22:20:02.839780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.839825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.840125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.840142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.840154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.842016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.842312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.842359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.842392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.842687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.842703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.842757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.842792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.689 [2024-07-13 22:20:02.842824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.842856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.843195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.843212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.843226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.845017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.845065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.845115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.845401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.845735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.845755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.845802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.845836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.689 [2024-07-13 22:20:02.846121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.846157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.846472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.846490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.846503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.848530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.848833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.849131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.849403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.849740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.849756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.850061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.850103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.689 [2024-07-13 22:20:02.850379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.850657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.851008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.851025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.851038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.853032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.853324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.853609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.853899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.854249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.854278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.854567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.854846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.689 [2024-07-13 22:20:02.855135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.855424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.855704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.855720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.855732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.857913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.858210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.858499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.858777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.859110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.859128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.859417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.859700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.689 [2024-07-13 22:20:02.860000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.860267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.860610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.860626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.860639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.862595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.862887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.863172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.863451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.863742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.863758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.864058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.864341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.689 [2024-07-13 22:20:02.864618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.864897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.865234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.865251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.865263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.689 [2024-07-13 22:20:02.867278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.690 [2024-07-13 22:20:02.867572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.690 [2024-07-13 22:20:02.867854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.690 [2024-07-13 22:20:02.868149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.690 [2024-07-13 22:20:02.868485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.690 [2024-07-13 22:20:02.868502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.690 [2024-07-13 22:20:02.868792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.690 [2024-07-13 22:20:02.869082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.690 [2024-07-13 22:20:02.869362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... identical "Failed to get src_mbufs!" errors repeated through 2024-07-13 22:20:03.023172 ...]
00:36:43.693 [2024-07-13 22:20:03.023450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.023728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.023966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.023983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.023996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.026127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.027181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.028255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.028666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.029017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.029036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.029336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.029624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.693 [2024-07-13 22:20:03.029920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.030831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.031091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.031108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.031120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.033574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.034665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.035329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.035610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.035983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.036002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.036301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.036588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.693 [2024-07-13 22:20:03.037910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.038992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.039224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.039240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.039252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.041690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.042082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.042367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.042646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.043013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.043031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.043325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.044287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.693 [2024-07-13 22:20:03.045171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.046204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.046434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.046449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.046462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.048485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.048793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.049096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.049375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.049718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.049735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.050528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.051338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.693 [2024-07-13 22:20:03.052327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.053367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.053645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.053672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.053688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.055290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.055344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.055622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.055910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.056243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.056260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.056917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.057744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.693 [2024-07-13 22:20:03.058780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.059805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.060110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.060127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.060140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.061691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.061991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.062281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.062319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.062663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.062681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.063397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.064228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.693 [2024-07-13 22:20:03.064268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.065300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.065533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.065549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.065562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.066910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.066956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.693 [2024-07-13 22:20:03.066987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.067019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.067319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.067335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.067385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.067664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.694 [2024-07-13 22:20:03.067699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.067732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.068054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.068072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.068086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.069694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.069744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.069777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.069810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.070200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.070218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.070267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.070301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.694 [2024-07-13 22:20:03.070335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.070368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.070620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.070636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.070649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.072354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.072405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.072439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.072473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.072819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.072837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.072879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.072921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.694 [2024-07-13 22:20:03.072960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.694 [2024-07-13 22:20:03.073005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.954 [2024-07-13 22:20:03.073387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.954 [2024-07-13 22:20:03.073423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.954 [2024-07-13 22:20:03.073437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.954 [2024-07-13 22:20:03.074797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.954 [2024-07-13 22:20:03.074847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.954 [2024-07-13 22:20:03.074881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.954 [2024-07-13 22:20:03.074922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.954 [2024-07-13 22:20:03.075154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.954 [2024-07-13 22:20:03.075169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.954 [2024-07-13 22:20:03.075218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.954 [2024-07-13 22:20:03.075251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.954 [2024-07-13 22:20:03.075284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.954 [2024-07-13 22:20:03.075316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.954 [2024-07-13 22:20:03.075538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.954 [2024-07-13 22:20:03.075553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.954 [2024-07-13 22:20:03.075565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.077385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.077432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.077466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.077498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.077765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.077781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.077829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.077863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.955 [2024-07-13 22:20:03.077897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.077948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.078177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.078193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.078206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.079562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.079607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.079640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.079691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.079922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.079939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.079988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.080021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.955 [2024-07-13 22:20:03.080053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.080085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.080336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.080353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.080367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.082294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.082343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.082375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.082407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.082657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.082672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.082719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.955 [2024-07-13 22:20:03.082752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.955 [2024-07-13 22:20:03.082785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.958 [... same *ERROR* message repeated for subsequent allocation attempts through 22:20:03.138058 ...] 
00:36:43.958 [2024-07-13 22:20:03.138092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.138135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.138407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.138423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.138435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.140522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.140578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.140632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.140677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.141062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.141079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.141141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.141189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.958 [2024-07-13 22:20:03.141222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.141256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.141566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.141581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.141593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.143394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.143440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.143489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.143531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.143817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.143833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.143894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.143946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.958 [2024-07-13 22:20:03.143979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.144011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.144339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.144356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.144369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.146116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.146418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.146457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.146489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.146817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.146839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.146883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.146929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.958 [2024-07-13 22:20:03.146962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.146998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.147345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.147365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.147378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.149147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.149193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.149237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.149519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.149873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.149890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.149946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.149981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.958 [2024-07-13 22:20:03.150262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.150315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.150602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.150618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.150631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.152823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.153134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.153423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.153703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.154041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.154060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.154353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.154393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.958 [2024-07-13 22:20:03.154674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.154967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.155312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.155329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.155341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.157697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.158001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.158301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.158586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.158922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.158939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.159230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.159722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.958 [2024-07-13 22:20:03.160478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.161070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.161312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.161328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.161341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.163540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.958 [2024-07-13 22:20:03.163836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.164132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.164415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.164731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.164747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.165051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.165335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.959 [2024-07-13 22:20:03.165618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.165898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.166246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.166263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.166276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.168229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.168521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.168801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.169090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.169444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.169460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.169752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.170053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.959 [2024-07-13 22:20:03.170340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.170618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.170942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.170959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.170971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.173006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.173310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.173592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.174618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.174896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.174919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.175967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.176992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.959 [2024-07-13 22:20:03.177329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.178261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.178491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.178507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.178520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.180331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.180621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.181340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.182197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.182429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.182445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.183514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.184017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.959 [2024-07-13 22:20:03.185103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.186287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.186517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.186533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.186549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.188502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.189252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.190082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.191122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.191359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.191375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.191941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.193059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.959 [2024-07-13 22:20:03.194230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.195309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.195539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.195556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.195569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.197886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.198741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.199738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.200778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.201050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.201067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.202137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.203102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.959 [2024-07-13 22:20:03.204158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.205331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.205622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.205638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.205651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.208253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.209279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.210327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.210952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.211187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.211203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.959 [2024-07-13 22:20:03.212123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.960 [2024-07-13 22:20:03.213138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:43.960 [2024-07-13 22:20:03.214186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.960 [2024-07-13 22:20:03.214474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.960 [2024-07-13 22:20:03.214832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.960 [2024-07-13 22:20:03.214850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.960 [2024-07-13 22:20:03.214863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.960 [2024-07-13 22:20:03.217661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.960 [2024-07-13 22:20:03.218746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.960 [2024-07-13 22:20:03.219734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.960 [2024-07-13 22:20:03.220495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.960 [2024-07-13 22:20:03.220748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.960 [2024-07-13 22:20:03.220765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.960 [2024-07-13 22:20:03.221809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:43.960 [2024-07-13 22:20:03.222838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.223 [2024-07-13 22:20:03.377313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.378466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.378699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.378714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.378726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.381149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.381205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.381489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.381770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.382093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.382111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.382406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.382687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.223 [2024-07-13 22:20:03.383783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.384958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.385186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.385203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.385215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.387589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.387887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.388177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.388216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.388542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.388558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.388853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.389144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.223 [2024-07-13 22:20:03.389183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.390296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.390529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.390545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.390557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.391937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.391993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.392029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.392060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.392285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.392299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.392347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.393042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.223 [2024-07-13 22:20:03.393080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.393122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.393445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.393462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.393476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.395162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.395207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.395244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.395276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.395501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.395515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.395564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.395597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.223 [2024-07-13 22:20:03.395628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.395661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.395882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.395898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.395916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.397374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.397419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.397455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.397492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.397762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.397776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.397827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.397861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.223 [2024-07-13 22:20:03.397893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.397933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.398287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.398304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.398318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.399930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.399975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.400006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.400038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.223 [2024-07-13 22:20:03.400259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.400274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.400325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.400364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.224 [2024-07-13 22:20:03.400395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.400427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.400650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.400666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.400678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.402010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.402065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.402098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.402131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.402386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.402400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.402454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.402488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.224 [2024-07-13 22:20:03.402520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.402556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.402886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.402908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.402924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.404633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.404688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.404720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.404752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.405098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.405116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.405158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.405191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.224 [2024-07-13 22:20:03.405224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.405257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.405558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.405573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.405585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.407377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.407423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.407456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.407489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.407831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.407849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.407895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.407949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.224 [2024-07-13 22:20:03.407983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.408015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.408340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.408363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.408376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.410164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.410210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.410248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.410280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.410571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.410586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.410635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.410671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.224 [2024-07-13 22:20:03.410703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.410737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.411081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.411100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.411113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.413060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.413116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.413151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.413184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.413523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.413539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.413584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.413618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.224 [2024-07-13 22:20:03.413651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.413684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.413986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.414004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.414017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.415772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.415818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.415856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.415889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.416225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.416243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.416288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.416338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.224 [2024-07-13 22:20:03.416381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.416419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.416706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.416722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.416735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.418667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.418714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.418747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.418780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.419062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.419078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.419129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.224 [2024-07-13 22:20:03.419162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.224 [2024-07-13 22:20:03.419194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:44.227 [2024-07-13 22:20:03.476228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.227 [2024-07-13 22:20:03.476268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.227 [2024-07-13 22:20:03.476497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.227 [2024-07-13 22:20:03.476512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.227 [2024-07-13 22:20:03.476525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.227 [2024-07-13 22:20:03.478491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.227 [2024-07-13 22:20:03.478791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.227 [2024-07-13 22:20:03.479100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.227 [2024-07-13 22:20:03.480004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.227 [2024-07-13 22:20:03.480330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.480347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.480639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.480680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.228 [2024-07-13 22:20:03.481525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.481922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.482253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.482271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.482284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.484126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.485022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.485370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.485653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.485939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.485956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.486251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.486849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.228 [2024-07-13 22:20:03.487509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.487791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.488097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.488114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.488127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.490085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.490378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.490658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.490948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.491280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.491297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.491591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.491874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.228 [2024-07-13 22:20:03.492165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.492445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.492761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.492777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.492789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.495302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.496124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.496954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.497991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.498221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.498238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.498826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.499130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.228 [2024-07-13 22:20:03.499427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.499715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.500058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.500076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.500089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.501907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.502732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.503761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.504791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.505085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.505102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.505396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.505679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.228 [2024-07-13 22:20:03.505968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.506249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.506537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.506553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.506566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.508861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.509911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.510957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.511780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.512079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.512100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.512392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.512673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.228 [2024-07-13 22:20:03.512960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.513424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.513691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.513708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.513720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.516302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.517378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.518339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.518623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.518963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.518981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.519274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.519555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.228 [2024-07-13 22:20:03.519884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.520804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.521039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.521057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.521069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.523540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.524645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.524940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.525224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.525534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.525550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.525841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.526129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.228 [2024-07-13 22:20:03.527172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.528307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.528538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.528554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.228 [2024-07-13 22:20:03.528567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.531046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.531345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.531627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.531914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.532261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.532279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.532568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.533695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.229 [2024-07-13 22:20:03.534835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.535976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.536203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.536219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.536232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.537824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.538125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.538406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.538690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.539047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.539067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.540164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.541150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.229 [2024-07-13 22:20:03.542217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.543364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.543689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.543705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.543718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.545382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.545676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.545980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.546260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.546491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.546507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.547394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.548428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.229 [2024-07-13 22:20:03.549543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.550007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.550279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.550295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.550308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.552116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.552427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.552709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.553722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.553991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.554008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.555045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.556063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.229 [2024-07-13 22:20:03.556407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.557385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.557617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.557633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.557645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.559523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.559814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.560610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.561435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.561666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.561682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.562722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.229 [2024-07-13 22:20:03.563251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.229 [2024-07-13 22:20:03.564406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:44.492 [last error repeated through 2024-07-13 22:20:03.721674; duplicate entries omitted]
00:36:44.492 [2024-07-13 22:20:03.721710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.721743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.722078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.722095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.722107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.723939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.723984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.724017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.724049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.724368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.724384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.724425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.724459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.492 [2024-07-13 22:20:03.724492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.724524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.724849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.724865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.724877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.726738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.726783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.726814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.726845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.727172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.727190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.727231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.727265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.492 [2024-07-13 22:20:03.727297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.727329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.727596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.727612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.727624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.729456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.729503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.729535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.729568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.729878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.729893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.729960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.730015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.492 [2024-07-13 22:20:03.730059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.730092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.730379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.730395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.730408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.732276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.732321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.732364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.732395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.732769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.732784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.732835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.732887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.492 [2024-07-13 22:20:03.732937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.732981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.733315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.733332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.492 [2024-07-13 22:20:03.733344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.735177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.735222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.735267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.735300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.735620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.735637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.735691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.735725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.493 [2024-07-13 22:20:03.735758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.735790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.736119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.736136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.736150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.737908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.737954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.737989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.738033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.738404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.738421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.738464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.738499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.493 [2024-07-13 22:20:03.738531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.738574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.738872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.738888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.738900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.740621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.740667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.740700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.740732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.741078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.741096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.741141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.741176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.493 [2024-07-13 22:20:03.741210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.741246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.741606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.741623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.741636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.743511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.743556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.743588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.743620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.743958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.743974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.744016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.744051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.493 [2024-07-13 22:20:03.744085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.744119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.744451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.744467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.744479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.746242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.746286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.746318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.746349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.746692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.746709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.746749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.746784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.493 [2024-07-13 22:20:03.746817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.746852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.747137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.747153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.747167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.749002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.749047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.749084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.749116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.749434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.749452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.749501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.749536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.493 [2024-07-13 22:20:03.749572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.749605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.749914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.749931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.749944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.752055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.752102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.752156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.752201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.752538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.752553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.752609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.752645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.493 [2024-07-13 22:20:03.752678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.752712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.753025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.753041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.753053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.754881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.754944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.754991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.755035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.755309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.755328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.755371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.493 [2024-07-13 22:20:03.755405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.494 [2024-07-13 22:20:03.755438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.494 [2024-07-13 22:20:03.755472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.494 [2024-07-13 22:20:03.755810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.494 [2024-07-13 22:20:03.755827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.494 [2024-07-13 22:20:03.755840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.494 [2024-07-13 22:20:03.757638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.494 [2024-07-13 22:20:03.757696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.494 [2024-07-13 22:20:03.757729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.494 [2024-07-13 22:20:03.757761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.494 [2024-07-13 22:20:03.758120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.494 [2024-07-13 22:20:03.758138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.494 [2024-07-13 22:20:03.758181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.494 [2024-07-13 22:20:03.758218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.494 [2024-07-13 22:20:03.758253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.756 [... identical *ERROR* line repeated continuously from 22:20:03.758287 through 22:20:03.952747 ...] 
00:36:44.757 [2024-07-13 22:20:03.952763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.757 [2024-07-13 22:20:03.953832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.757 [2024-07-13 22:20:03.954126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.757 [2024-07-13 22:20:03.954405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.757 [2024-07-13 22:20:03.954683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.955036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.955054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.955067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.955080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.957158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.958257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.959225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.960270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.758 [2024-07-13 22:20:03.960504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.960520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.960815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.961103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.961385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.961663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.962005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.962022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.962046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.962059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.964361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.965358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.966417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.758 [2024-07-13 22:20:03.967559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.967895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.967917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.968214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.968493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.968770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.969058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.969292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.969308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.969320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.969334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.971495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.972528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.758 [2024-07-13 22:20:03.973622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.973914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.974249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.974268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.974557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.974835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.975122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.976129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.976365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.976381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.976394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.976406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.978270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.758 [2024-07-13 22:20:03.978560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.978838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.979135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.979479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.979496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.980438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.981252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.982278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.983314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.983630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.983646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.983658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.983671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.758 [2024-07-13 22:20:03.985252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.985542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.985821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.986108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.986348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.986364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.987183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.988201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.989251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.990269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.990576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.990592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.990604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.758 [2024-07-13 22:20:03.990621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.992553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.992850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.993137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.993176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.993516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.993532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.993820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.994597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.995509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.996033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.996266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.996282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.758 [2024-07-13 22:20:03.996294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.996306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.998242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.998299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.999022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.999060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.999339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:03.999355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:04.000110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:04.000153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:04.001247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.758 [2024-07-13 22:20:04.001290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.001517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.759 [2024-07-13 22:20:04.001532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.001545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.001557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.003695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.003747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.004487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.004527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.004756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.004772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.005571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.005613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.006359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.006398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.759 [2024-07-13 22:20:04.006672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.006688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.006700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.006713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.009616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.009674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.010695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.010735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.010998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.011015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.011764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.011806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.012397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.759 [2024-07-13 22:20:04.012437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.012774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.012790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.012803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.012817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.015489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.015545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.016091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.016130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.016385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.016401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.017264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.017307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.759 [2024-07-13 22:20:04.017585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.017622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.017969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.017987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.018000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.018015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.019979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.020030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.020768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.020807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.021045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.021063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.021355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.759 [2024-07-13 22:20:04.021393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.021671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.021707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.022037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.022054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.022067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.022079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.024545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.024602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.025595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.025636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.025949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.759 [2024-07-13 22:20:04.025965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.759 [2024-07-13 22:20:04.026258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 last message repeated 272 times, from [2024-07-13 22:20:04.026297] through [2024-07-13 22:20:04.089717]
00:36:44.762 [2024-07-13 22:20:04.089730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.091216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.091267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.091300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.091332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.091645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.091661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.091702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.091736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.091769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.091802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.092142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.092159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.762 [2024-07-13 22:20:04.092173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.092186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.093551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.093606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.093638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.093670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.093894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.093916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.093965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.094000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.094033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.094077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.094440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:44.762 [2024-07-13 22:20:04.094456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.094470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.094483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.096154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.096199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.096483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.096521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.096559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.096593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.096814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.096829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.096841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:44.762 [2024-07-13 22:20:04.096853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.024 [2024-07-13 22:20:04.193233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.193306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.194346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.196974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.197037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.198068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.198117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.198636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.198932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.199275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.199292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.199352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.199611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.024 [2024-07-13 22:20:04.199657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.199920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.199965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.201078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.201130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.202254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.202480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.202496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.202508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.202521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.204898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.205253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.206317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.024 [2024-07-13 22:20:04.206607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.206946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.206964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.208124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.208404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.208682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.209810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.210045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.210061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.210073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.210086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.212426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.213462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.024 [2024-07-13 22:20:04.213800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.214085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.214421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.214439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.214726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.215014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.216032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.216928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.217155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.217170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.217183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.217195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.219541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.024 [2024-07-13 22:20:04.220062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.221244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.221526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.221874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.221893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.222935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.223222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.223501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.224532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.224809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.224825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.224838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.224851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.024 [2024-07-13 22:20:04.227173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.228205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.228779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.229076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.229417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.229435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.229725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.230010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.230827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.231639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.231869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.231885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.231897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.024 [2024-07-13 22:20:04.231914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.234249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.235006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.235909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.236211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.236535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.236551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.237251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.237722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.238014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.238757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.239011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.024 [2024-07-13 22:20:04.239028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.025 [2024-07-13 22:20:04.239041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.239053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.241432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.242493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.243311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.243593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.243934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.243952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.244242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.244520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.245102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.245926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.246154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.025 [2024-07-13 22:20:04.246171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.246183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.246196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.248571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.249531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.250273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.250788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.251132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.251150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.251692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.252408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.252689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.253236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.025 [2024-07-13 22:20:04.253486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.253501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.253513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.253525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.255894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.257107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.258184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.258465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.258806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.258822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.259119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.259399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.025 [2024-07-13 22:20:04.259714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.025 [2024-07-13 22:20:04.260654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [identical "Failed to get src_mbufs!" errors from accel_dpdk_cryptodev.c:468 repeated continuously through 2024-07-13 22:20:04.364694; duplicates omitted]
00:36:45.028 [2024-07-13 22:20:04.364751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.365044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.365385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.365402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.365449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.365728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.366011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.366048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.366362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.366378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.366390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.366403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.368082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.028 [2024-07-13 22:20:04.368687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.368726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.369250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.369590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.369606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.369898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.369951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.370232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.370272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.370592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.370608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.370620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.370633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.028 [2024-07-13 22:20:04.372455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.372748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.372790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.373878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.374225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.374242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.374538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.374587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.375676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.375719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.376097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.376115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.376127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.028 [2024-07-13 22:20:04.376141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.377845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.378141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.378180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.378457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.378783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.378799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.379099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.379146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.379431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.379470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.379696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.379713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.028 [2024-07-13 22:20:04.379725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.379737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.381447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.381739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.381786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.381820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.382126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.382142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.382436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.382475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.382754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.382788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.383090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.028 [2024-07-13 22:20:04.383107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.028 [2024-07-13 22:20:04.383119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.383132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.384803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.384849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.384882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.384920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.385249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.385264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.385305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.385355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.385394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.385426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.029 [2024-07-13 22:20:04.385648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.385663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.385680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.385692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.387432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.387477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.387508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.387541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.387869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.387885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.387934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.387969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.388014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.029 [2024-07-13 22:20:04.388049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.388399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.388413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.388425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.388438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.390144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.390189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.390221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.390254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.390550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.390566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.390610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.390644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.029 [2024-07-13 22:20:04.390677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.390710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.391035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.391051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.391063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.391076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.392638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.392692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.392753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.392789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.393086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.393102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.393145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.029 [2024-07-13 22:20:04.393178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.393210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.393245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.393571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.393587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.393600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.393614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.395361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.395417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.395456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.395488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.395829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.395845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.029 [2024-07-13 22:20:04.395888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.395929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.395963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.395997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.396268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.396284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.396295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.396309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.397949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.397995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.398031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.398066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.398389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.029 [2024-07-13 22:20:04.398406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.398449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.398484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.398517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.398551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.398856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.398873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.398886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.398898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.400416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.400461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.400492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.400537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.029 [2024-07-13 22:20:04.400758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.400773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.400823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.400863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.400899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.400948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.401168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.401183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.401195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.029 [2024-07-13 22:20:04.401207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.030 [2024-07-13 22:20:04.402826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.030 [2024-07-13 22:20:04.402872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.030 [2024-07-13 22:20:04.402912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.030 [2024-07-13 22:20:04.402946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:45.290 last message repeated for every allocation attempt through [2024-07-13 22:20:04.494043]
00:36:45.290 [2024-07-13 22:20:04.494305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.290 [2024-07-13 22:20:04.494320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.290 [2024-07-13 22:20:04.494333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.290 [2024-07-13 22:20:04.494346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.290 [2024-07-13 22:20:04.496337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.290 [2024-07-13 22:20:04.496388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.290 [2024-07-13 22:20:04.496951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.290 [2024-07-13 22:20:04.496988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.290 [2024-07-13 22:20:04.497321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.497338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.498095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.498135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.498169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.291 [2024-07-13 22:20:04.499032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.499264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.499280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.499293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.499305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.502079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.502134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.502435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.502471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.502792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.502808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.502854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.503690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.291 [2024-07-13 22:20:04.503725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.504011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.504305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.504321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.504333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.504346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.506812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.506865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.507975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.508017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.508247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.508263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.508311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.291 [2024-07-13 22:20:04.509100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.509137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.509416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.509751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.509769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.509781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.509796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.512639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.512690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.513564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.513601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.513833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.513848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.291 [2024-07-13 22:20:04.513894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.514797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.514833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.515938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.516169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.516184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.516197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.516209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.518065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.518122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.519094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.519951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.520184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.291 [2024-07-13 22:20:04.520199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.520251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.521300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.521344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.521929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.522178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.522193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.522205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.522217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.525063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.525355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.525763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.526606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.291 [2024-07-13 22:20:04.526835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.526852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.527907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.528866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.529612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.530440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.530672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.530688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.530700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.530713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.532588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.533001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.533826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.291 [2024-07-13 22:20:04.534846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.535083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.535099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.536054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.536854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.537689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.538722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.538959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.538976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.538988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.539000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.541351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.542191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.291 [2024-07-13 22:20:04.543219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.544249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.544577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.544594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.545695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.546694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.547765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.548915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.549268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.549284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.549296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.549310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.551740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.291 [2024-07-13 22:20:04.552778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.553816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.554304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.554535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.554551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.555564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.556647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.557797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.558093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.558434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.558454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.558466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.558480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.291 [2024-07-13 22:20:04.561070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.562144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.562608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.563444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.563676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.563692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.564659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.565479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.565747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.566043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.566379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.566395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.566408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.291 [2024-07-13 22:20:04.566422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.568800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.569346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.570192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.571219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.571452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.571468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.572272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.572554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.572835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.573128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.573468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.573485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.291 [2024-07-13 22:20:04.573498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.573512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.575542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.576379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.577402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.578440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.578730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.578746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.579063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.579345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.579625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.580104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:45.291 [2024-07-13 22:20:04.580382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:45.291 [2024-07-13 22:20:04.580398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:45.292 (previous message repeated for subsequent allocation attempts, timestamps 22:20:04.580410 through 22:20:04.665209)
00:36:45.549 00:36:45.549 Latency(us) 00:36:45.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:45.549 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:36:45.549 Verification LBA range: start 0x0 length 0x100 00:36:45.549 crypto_ram : 5.46 49.79 3.11 0.00 0.00 2499773.04 3853.52 1865626.42 00:36:45.549 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:36:45.549 Verification LBA range: start 0x100 length 0x100 00:36:45.549 crypto_ram : 5.47 54.64 3.42 0.00 0.00 2236837.67 8074.04 1731408.69 00:36:45.549 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:36:45.549 Verification LBA range: start 0x0 length 0x100 00:36:45.549 crypto_ram1 : 5.47 50.85 3.18 0.00 0.00 2393114.08 2595.23 1731408.69 00:36:45.549 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:36:45.549 Verification LBA range: start 0x100 length 0x100 00:36:45.549 crypto_ram1 : 5.49 58.65 3.67 0.00 0.00 2057266.71 14260.63 1597190.96 00:36:45.549 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:36:45.549 Verification LBA range: start 0x0 length 0x100 00:36:45.549 crypto_ram2 : 5.41 375.14 23.45 0.00 0.00 316691.13 48024.78 476472.93 00:36:45.549 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:36:45.549 Verification LBA range: start 0x100 length 0x100 00:36:45.549 crypto_ram2 : 5.39 403.31 25.21 0.00 0.00 294510.02 5793.38 459695.72 00:36:45.549 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:36:45.549 Verification LBA range: start 0x0 length 0x100 00:36:45.549 crypto_ram3 : 5.45 387.56 24.22 0.00 0.00 300344.09 14260.63 258369.13 00:36:45.549 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:36:45.549 Verification LBA range: start 0x100 length 0x100 00:36:45.549 crypto_ram3 : 5.45 421.91 26.37 0.00 0.00 276371.68 1671.17 
332188.88 00:36:45.549 =================================================================================================================== 00:36:45.549 Total : 1801.86 112.62 0.00 0.00 534417.65 1671.17 1865626.42 00:36:48.075 00:36:48.075 real 0m11.218s 00:36:48.075 user 0m20.858s 00:36:48.075 sys 0m0.571s 00:36:48.075 22:20:06 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:48.075 22:20:06 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:36:48.075 ************************************ 00:36:48.075 END TEST bdev_verify_big_io 00:36:48.075 ************************************ 00:36:48.075 22:20:06 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:36:48.075 22:20:06 blockdev_crypto_qat -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:48.075 22:20:06 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:36:48.075 22:20:06 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:48.075 22:20:06 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:36:48.075 ************************************ 00:36:48.075 START TEST bdev_write_zeroes 00:36:48.075 ************************************ 00:36:48.075 22:20:06 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:48.075 [2024-07-13 22:20:07.077554] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:36:48.075 [2024-07-13 22:20:07.077641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1619919 ] 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3d:01.0 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3d:01.1 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3d:01.2 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3d:01.3 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3d:01.4 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3d:01.5 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3d:01.6 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3d:01.7 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3d:02.0 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3d:02.1 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3d:02.2 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3d:02.3 cannot be used 
00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3d:02.4 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3d:02.5 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3d:02.6 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3d:02.7 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3f:01.0 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3f:01.1 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3f:01.2 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3f:01.3 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3f:01.4 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3f:01.5 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3f:01.6 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3f:01.7 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3f:02.0 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3f:02.1 cannot be used 00:36:48.075 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3f:02.2 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3f:02.3 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3f:02.4 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3f:02.5 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3f:02.6 cannot be used 00:36:48.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:48.075 EAL: Requested device 0000:3f:02.7 cannot be used 00:36:48.075 [2024-07-13 22:20:07.236306] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:48.075 [2024-07-13 22:20:07.437889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:48.075 [2024-07-13 22:20:07.459099] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:36:48.332 [2024-07-13 22:20:07.467121] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:36:48.332 [2024-07-13 22:20:07.475131] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:36:48.589 [2024-07-13 22:20:07.761269] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:36:51.137 [2024-07-13 22:20:10.343369] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:36:51.137 [2024-07-13 22:20:10.343439] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:36:51.137 [2024-07-13 22:20:10.343453] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base 
bdev arrival 00:36:51.137 [2024-07-13 22:20:10.351375] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:36:51.137 [2024-07-13 22:20:10.351409] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:36:51.137 [2024-07-13 22:20:10.351421] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:36:51.137 [2024-07-13 22:20:10.359405] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:36:51.137 [2024-07-13 22:20:10.359437] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:36:51.137 [2024-07-13 22:20:10.359448] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:36:51.137 [2024-07-13 22:20:10.367414] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:36:51.137 [2024-07-13 22:20:10.367441] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:36:51.137 [2024-07-13 22:20:10.367451] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:36:51.394 Running I/O for 1 seconds... 
00:36:52.325 00:36:52.325 Latency(us) 00:36:52.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:52.325 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:36:52.325 crypto_ram : 1.02 2729.15 10.66 0.00 0.00 46615.52 5006.95 58300.83 00:36:52.325 Job: crypto_ram1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:36:52.325 crypto_ram1 : 1.02 2742.34 10.71 0.00 0.00 46197.70 4639.95 53687.09 00:36:52.325 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:36:52.325 crypto_ram2 : 1.02 21234.09 82.95 0.00 0.00 5951.04 1848.12 8126.46 00:36:52.325 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:36:52.325 crypto_ram3 : 1.02 21266.07 83.07 0.00 0.00 5927.75 1900.54 6448.74 00:36:52.325 =================================================================================================================== 00:36:52.325 Total : 47971.65 187.39 0.00 0.00 10571.34 1848.12 58300.83 00:36:54.220 00:36:54.220 real 0m6.437s 00:36:54.220 user 0m5.935s 00:36:54.220 sys 0m0.454s 00:36:54.220 22:20:13 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:54.220 22:20:13 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:36:54.220 ************************************ 00:36:54.220 END TEST bdev_write_zeroes 00:36:54.220 ************************************ 00:36:54.220 22:20:13 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:36:54.220 22:20:13 blockdev_crypto_qat -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:54.220 22:20:13 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:36:54.220 22:20:13 
blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:54.220 22:20:13 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:36:54.220 ************************************ 00:36:54.220 START TEST bdev_json_nonenclosed 00:36:54.220 ************************************ 00:36:54.220 22:20:13 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:54.220 [2024-07-13 22:20:13.605405] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:36:54.220 [2024-07-13 22:20:13.605518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1620987 ] 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3d:01.0 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3d:01.1 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3d:01.2 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3d:01.3 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3d:01.4 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3d:01.5 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3d:01.6 cannot 
be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3d:01.7 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3d:02.0 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3d:02.1 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3d:02.2 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3d:02.3 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3d:02.4 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3d:02.5 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3d:02.6 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3d:02.7 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3f:01.0 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3f:01.1 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3f:01.2 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3f:01.3 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3f:01.4 cannot be used 00:36:54.477 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3f:01.5 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3f:01.6 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3f:01.7 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3f:02.0 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3f:02.1 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3f:02.2 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3f:02.3 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3f:02.4 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3f:02.5 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3f:02.6 cannot be used 00:36:54.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:54.477 EAL: Requested device 0000:3f:02.7 cannot be used 00:36:54.477 [2024-07-13 22:20:13.766809] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:54.734 [2024-07-13 22:20:13.969281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:54.734 [2024-07-13 22:20:13.969358] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:36:54.734 [2024-07-13 22:20:13.969377] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:36:54.734 [2024-07-13 22:20:13.969388] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:54.991 00:36:54.991 real 0m0.861s 00:36:54.991 user 0m0.651s 00:36:54.991 sys 0m0.206s 00:36:54.991 22:20:14 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:36:54.991 22:20:14 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:54.991 22:20:14 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:36:54.991 ************************************ 00:36:54.991 END TEST bdev_json_nonenclosed 00:36:54.991 ************************************ 00:36:55.247 22:20:14 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 234 00:36:55.247 22:20:14 blockdev_crypto_qat -- bdev/blockdev.sh@782 -- # true 00:36:55.247 22:20:14 blockdev_crypto_qat -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:55.247 22:20:14 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:36:55.247 22:20:14 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:55.247 22:20:14 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:36:55.247 ************************************ 00:36:55.247 START TEST bdev_json_nonarray 00:36:55.247 ************************************ 00:36:55.248 22:20:14 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:55.248 
[2024-07-13 22:20:14.552240] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:36:55.248 [2024-07-13 22:20:14.552333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1621033 ] 00:36:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.504 EAL: Requested device 0000:3d:01.0 cannot be used 00:36:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.504 EAL: Requested device 0000:3d:01.1 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3d:01.2 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3d:01.3 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3d:01.4 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3d:01.5 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3d:01.6 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3d:01.7 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3d:02.0 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3d:02.1 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3d:02.2 cannot be used 00:36:55.505 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3d:02.3 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3d:02.4 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3d:02.5 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3d:02.6 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3d:02.7 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3f:01.0 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3f:01.1 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3f:01.2 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3f:01.3 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3f:01.4 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3f:01.5 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3f:01.6 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3f:01.7 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3f:02.0 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:36:55.505 EAL: Requested device 0000:3f:02.1 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3f:02.2 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3f:02.3 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3f:02.4 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3f:02.5 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3f:02.6 cannot be used 00:36:55.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:55.505 EAL: Requested device 0000:3f:02.7 cannot be used 00:36:55.505 [2024-07-13 22:20:14.713842] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:55.762 [2024-07-13 22:20:14.925757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:55.762 [2024-07-13 22:20:14.925835] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:36:55.762 [2024-07-13 22:20:14.925855] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:36:55.762 [2024-07-13 22:20:14.925867] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:56.019 00:36:56.019 real 0m0.865s 00:36:56.019 user 0m0.665s 00:36:56.019 sys 0m0.195s 00:36:56.019 22:20:15 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:36:56.019 22:20:15 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:56.019 22:20:15 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:36:56.019 ************************************ 00:36:56.019 END TEST bdev_json_nonarray 00:36:56.019 ************************************ 00:36:56.019 22:20:15 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 234 00:36:56.019 22:20:15 blockdev_crypto_qat -- bdev/blockdev.sh@785 -- # true 00:36:56.019 22:20:15 blockdev_crypto_qat -- bdev/blockdev.sh@787 -- # [[ crypto_qat == bdev ]] 00:36:56.019 22:20:15 blockdev_crypto_qat -- bdev/blockdev.sh@794 -- # [[ crypto_qat == gpt ]] 00:36:56.019 22:20:15 blockdev_crypto_qat -- bdev/blockdev.sh@798 -- # [[ crypto_qat == crypto_sw ]] 00:36:56.019 22:20:15 blockdev_crypto_qat -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:36:56.019 22:20:15 blockdev_crypto_qat -- bdev/blockdev.sh@811 -- # cleanup 00:36:56.019 22:20:15 blockdev_crypto_qat -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:36:56.019 22:20:15 blockdev_crypto_qat -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:36:56.019 22:20:15 blockdev_crypto_qat -- bdev/blockdev.sh@26 -- # [[ crypto_qat == rbd ]] 00:36:56.019 22:20:15 blockdev_crypto_qat -- bdev/blockdev.sh@30 -- # [[ crypto_qat == daos ]] 00:36:56.019 22:20:15 blockdev_crypto_qat -- bdev/blockdev.sh@34 -- # [[ crypto_qat = \g\p\t 
]] 00:36:56.019 22:20:15 blockdev_crypto_qat -- bdev/blockdev.sh@40 -- # [[ crypto_qat == xnvme ]] 00:36:56.019 00:36:56.019 real 1m34.054s 00:36:56.019 user 3m19.987s 00:36:56.019 sys 0m9.469s 00:36:56.019 22:20:15 blockdev_crypto_qat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:56.019 22:20:15 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:36:56.019 ************************************ 00:36:56.019 END TEST blockdev_crypto_qat 00:36:56.019 ************************************ 00:36:56.277 22:20:15 -- common/autotest_common.sh@1142 -- # return 0 00:36:56.277 22:20:15 -- spdk/autotest.sh@360 -- # run_test chaining /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:36:56.277 22:20:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:56.277 22:20:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:56.277 22:20:15 -- common/autotest_common.sh@10 -- # set +x 00:36:56.277 ************************************ 00:36:56.277 START TEST chaining 00:36:56.277 ************************************ 00:36:56.277 22:20:15 chaining -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:36:56.277 * Looking for test storage... 
00:36:56.277 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:36:56.277 22:20:15 chaining -- bdev/chaining.sh@14 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@7 -- # uname -s 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:36:56.277 22:20:15 chaining -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:56.277 22:20:15 chaining -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:56.277 22:20:15 chaining -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:56.277 22:20:15 chaining -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:56.277 22:20:15 chaining -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:56.277 22:20:15 chaining -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:56.277 22:20:15 chaining -- paths/export.sh@5 -- # export PATH 00:36:56.277 22:20:15 chaining -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@47 -- # : 0 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:56.277 22:20:15 chaining -- bdev/chaining.sh@16 -- # nqn=nqn.2016-06.io.spdk:cnode0 00:36:56.277 22:20:15 chaining -- bdev/chaining.sh@17 -- # key0=(00112233445566778899001122334455 11223344556677889900112233445500) 00:36:56.277 22:20:15 chaining -- bdev/chaining.sh@18 -- # key1=(22334455667788990011223344550011 33445566778899001122334455001122) 00:36:56.277 22:20:15 chaining -- bdev/chaining.sh@19 -- # bperfsock=/var/tmp/bperf.sock 00:36:56.277 22:20:15 chaining -- bdev/chaining.sh@20 -- # declare -A stats 00:36:56.277 22:20:15 chaining -- bdev/chaining.sh@66 -- # nvmftestinit 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:56.277 22:20:15 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:56.278 22:20:15 chaining -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:56.278 22:20:15 chaining -- nvmf/common.sh@410 -- # local 
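The exported PATH above repeats the `/opt/go`, `/opt/protoc`, and `/opt/golangci` entries several times because `paths/export.sh` prepends them unconditionally each time it is sourced. A small dedup sketch that keeps only the first occurrence of each entry (the helper name is illustrative, not part of SPDK):

```shell
#!/usr/bin/env bash
# Keep only the first occurrence of each colon-separated PATH entry.
# Illustrative helper; paths/export.sh itself just prepends unconditionally.
dedup_path() {
    local out='' seen=: entry
    local IFS=:
    for entry in $1; do
        case "$seen" in
            *":$entry:"*) continue ;;   # already kept this entry, skip repeat
        esac
        seen="$seen$entry:"
        out="${out:+$out:}$entry"
    done
    printf '%s\n' "$out"
}

dedup_path "/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/bin"
```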
-g is_hw=no 00:36:56.278 22:20:15 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:56.278 22:20:15 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:56.278 22:20:15 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:56.278 22:20:15 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:56.278 22:20:15 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:36:56.278 22:20:15 chaining -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:56.278 22:20:15 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:36:56.278 22:20:15 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@296 -- # e810=() 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@297 -- # x722=() 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@298 -- # mlx=() 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.0 (0x8086 - 0x159b)' 00:37:06.267 Found 0000:20:00.0 (0x8086 - 0x159b) 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:06.267 
22:20:24 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.1 (0x8086 - 0x159b)' 00:37:06.267 Found 0000:20:00.1 (0x8086 - 0x159b) 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.0: cvl_0_0' 00:37:06.267 Found net devices under 0000:20:00.0: cvl_0_0 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
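The `gather_supported_nvmf_pci_devs` trace above buckets NICs into the `e810`, `x722`, and `mlx` arrays by PCI vendor:device ID (here both ports match Intel `0x159b`, an E810 device). A simplified sketch of that bucketing; the lookup table mirrors IDs visible in the trace, and the sample addresses are illustrative:

```shell
#!/usr/bin/env bash
# Group NIC PCI addresses into family arrays by vendor:device ID, as the
# nvmf/common.sh device scan does. Table and sample inputs are illustrative.
declare -a e810=() x722=() mlx=()

classify_nic() {   # args: pci_addr vendor:device
    local addr=$1 id=$2
    case "$id" in
        0x8086:0x1592|0x8086:0x159b) e810+=("$addr") ;;  # Intel E810
        0x8086:0x37d2)               x722+=("$addr") ;;  # Intel X722
        0x15b3:*)                    mlx+=("$addr")  ;;  # Mellanox
    esac
}

classify_nic 0000:20:00.0 0x8086:0x159b
classify_nic 0000:20:00.1 0x8086:0x159b
echo "e810: ${#e810[@]} x722: ${#x722[@]} mlx: ${#mlx[@]}"
```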
00:37:06.267 22:20:24 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.1: cvl_0_1' 00:37:06.267 Found net devices under 0000:20:00.1: cvl_0_1 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@414 -- # is_hw=yes 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@248 -- # ip 
netns add cvl_0_0_ns_spdk 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:06.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:06.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:37:06.267 00:37:06.267 --- 10.0.0.2 ping statistics --- 00:37:06.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:06.267 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:06.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
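The `nvmf_tcp_init` steps traced above move the target port into its own network namespace, address both sides on `10.0.0.0/24`, open TCP 4420, and verify with `ping`. A dry-run sketch of that sequence; `run` only echoes the commands, since the real ones need root, and the interface names match the trace:

```shell
#!/usr/bin/env bash
# Dry-run of the netns plumbing from nvmf_tcp_init: target NIC port goes into
# its own namespace so initiator and target can talk over real ports on one
# host. run() echoes instead of executing (the real commands require root).
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0
INI_IF=cvl_0_1

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
```

Keeping the target in a namespace is what later forces every target-side command in the log through the `ip netns exec cvl_0_0_ns_spdk` prefix.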
00:37:06.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:37:06.267 00:37:06.267 --- 10.0.0.1 ping statistics --- 00:37:06.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:06.267 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@422 -- # return 0 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:06.267 22:20:24 chaining -- bdev/chaining.sh@67 -- # nvmfappstart -m 0x2 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:06.267 22:20:24 chaining -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:06.267 22:20:24 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@481 -- # nvmfpid=1625534 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:37:06.267 22:20:24 chaining -- nvmf/common.sh@482 -- # waitforlisten 1625534 00:37:06.267 22:20:24 chaining -- common/autotest_common.sh@829 -- # '[' -z 1625534 ']' 00:37:06.268 22:20:24 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:06.268 22:20:24 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:06.268 22:20:24 chaining -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:06.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:06.268 22:20:24 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:06.268 22:20:24 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:06.268 [2024-07-13 22:20:24.748093] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:06.268 [2024-07-13 22:20:24.748201] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3d:01.0 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3d:01.1 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3d:01.2 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3d:01.3 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3d:01.4 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3d:01.5 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3d:01.6 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3d:01.7 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:37:06.268 EAL: Requested device 0000:3d:02.0 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3d:02.1 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3d:02.2 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3d:02.3 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3d:02.4 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3d:02.5 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3d:02.6 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3d:02.7 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3f:01.0 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3f:01.1 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3f:01.2 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3f:01.3 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3f:01.4 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3f:01.5 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: 
Requested device 0000:3f:01.6 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3f:01.7 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3f:02.0 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3f:02.1 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3f:02.2 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3f:02.3 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3f:02.4 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3f:02.5 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3f:02.6 cannot be used 00:37:06.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.268 EAL: Requested device 0000:3f:02.7 cannot be used 00:37:06.268 [2024-07-13 22:20:24.920883] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:06.268 [2024-07-13 22:20:25.134480] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:06.268 [2024-07-13 22:20:25.134526] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:06.268 [2024-07-13 22:20:25.134541] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:06.268 [2024-07-13 22:20:25.134552] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:37:06.268 [2024-07-13 22:20:25.134563] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:06.268 [2024-07-13 22:20:25.134602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:06.268 22:20:25 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:06.268 22:20:25 chaining -- common/autotest_common.sh@862 -- # return 0 00:37:06.268 22:20:25 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:06.268 22:20:25 chaining -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:06.268 22:20:25 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:06.268 22:20:25 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:06.268 22:20:25 chaining -- bdev/chaining.sh@69 -- # mktemp 00:37:06.268 22:20:25 chaining -- bdev/chaining.sh@69 -- # input=/tmp/tmp.Fa9hH0O262 00:37:06.268 22:20:25 chaining -- bdev/chaining.sh@69 -- # mktemp 00:37:06.268 22:20:25 chaining -- bdev/chaining.sh@69 -- # output=/tmp/tmp.mWRCQgB33r 00:37:06.268 22:20:25 chaining -- bdev/chaining.sh@70 -- # trap 'tgtcleanup; exit 1' SIGINT SIGTERM EXIT 00:37:06.268 22:20:25 chaining -- bdev/chaining.sh@72 -- # rpc_cmd 00:37:06.268 22:20:25 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:06.268 22:20:25 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:06.268 malloc0 00:37:06.268 true 00:37:06.268 true 00:37:06.526 [2024-07-13 22:20:25.659728] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:37:06.526 crypto0 00:37:06.526 [2024-07-13 22:20:25.667738] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:37:06.526 crypto1 00:37:06.526 [2024-07-13 22:20:25.675871] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:06.526 [2024-07-13 22:20:25.692080] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
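The `get_stat` calls that follow pull counters out of `accel_get_stats` JSON with `jq`: top-level counters via `.sequence_executed`, per-opcode counters via a `select()` over `.operations[]`. A standalone sketch against a canned payload (the JSON sample is illustrative and `jq` is assumed available):

```shell
#!/usr/bin/env bash
# Sketch of the jq filters get_stat applies to accel_get_stats output.
# The stats JSON below is a canned sample, not real RPC output.
stats_json='{
  "sequence_executed": 13,
  "operations": [
    {"opcode": "encrypt", "executed": 2},
    {"opcode": "decrypt", "executed": 12}
  ]
}'

# Top-level counter:
echo "$stats_json" | jq -r .sequence_executed

# Per-opcode counter, matching chaining.sh's filter shape:
echo "$stats_json" | jq -r '.operations[] | select(.opcode == "encrypt").executed'
```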
00:37:06.526 22:20:25 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@85 -- # update_stats 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@39 -- # opcode= 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:37:06.526 22:20:25 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:06.526 22:20:25 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:06.526 22:20:25 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=12 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:37:06.526 22:20:25 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:06.526 22:20:25 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:06.526 22:20:25 chaining -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]= 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:37:06.526 22:20:25 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:06.526 22:20:25 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:06.526 22:20:25 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:37:06.526 22:20:25 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:06.526 22:20:25 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:06.526 22:20:25 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:37:06.526 22:20:25 chaining -- bdev/chaining.sh@88 -- # dd if=/dev/urandom of=/tmp/tmp.Fa9hH0O262 bs=1K count=64 00:37:06.526 64+0 records in 00:37:06.527 64+0 records out 00:37:06.527 65536 bytes (66 kB, 64 KiB) copied, 0.0003044 s, 215 MB/s 00:37:06.527 22:20:25 chaining -- bdev/chaining.sh@89 -- # spdk_dd --if /tmp/tmp.Fa9hH0O262 --ob Nvme0n1 --bs 65536 --count 1 00:37:06.527 22:20:25 chaining -- bdev/chaining.sh@25 -- # local config 00:37:06.527 22:20:25 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:37:06.527 22:20:25 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:37:06.527 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:37:06.785 22:20:25 chaining -- bdev/chaining.sh@31 -- # config='{ 00:37:06.785 "subsystems": [ 00:37:06.785 { 00:37:06.785 "subsystem": "bdev", 00:37:06.785 "config": [ 00:37:06.785 { 00:37:06.785 "method": "bdev_nvme_attach_controller", 00:37:06.785 "params": { 00:37:06.785 "trtype": "tcp", 00:37:06.785 "adrfam": "IPv4", 00:37:06.785 "name": "Nvme0", 00:37:06.785 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:06.785 "traddr": "10.0.0.2", 00:37:06.785 "trsvcid": "4420" 00:37:06.785 } 00:37:06.785 }, 00:37:06.785 { 00:37:06.785 "method": "bdev_set_options", 00:37:06.785 "params": { 00:37:06.785 "bdev_auto_examine": false 00:37:06.785 } 00:37:06.785 } 00:37:06.785 ] 00:37:06.785 } 00:37:06.785 ] 00:37:06.785 }' 00:37:06.785 22:20:25 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.Fa9hH0O262 --ob Nvme0n1 --bs 65536 --count 1 00:37:06.785 22:20:25 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:37:06.785 "subsystems": [ 00:37:06.785 { 00:37:06.785 
"subsystem": "bdev", 00:37:06.785 "config": [ 00:37:06.785 { 00:37:06.785 "method": "bdev_nvme_attach_controller", 00:37:06.785 "params": { 00:37:06.785 "trtype": "tcp", 00:37:06.785 "adrfam": "IPv4", 00:37:06.785 "name": "Nvme0", 00:37:06.785 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:06.785 "traddr": "10.0.0.2", 00:37:06.785 "trsvcid": "4420" 00:37:06.785 } 00:37:06.785 }, 00:37:06.785 { 00:37:06.785 "method": "bdev_set_options", 00:37:06.785 "params": { 00:37:06.785 "bdev_auto_examine": false 00:37:06.785 } 00:37:06.785 } 00:37:06.785 ] 00:37:06.785 } 00:37:06.785 ] 00:37:06.785 }' 00:37:06.785 [2024-07-13 22:20:26.010187] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:06.785 [2024-07-13 22:20:26.010274] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1625696 ] 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3d:01.0 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3d:01.1 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3d:01.2 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3d:01.3 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3d:01.4 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3d:01.5 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3d:01.6 cannot be used 
00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3d:01.7 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3d:02.0 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3d:02.1 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3d:02.2 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3d:02.3 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3d:02.4 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3d:02.5 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3d:02.6 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3d:02.7 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3f:01.0 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3f:01.1 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3f:01.2 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3f:01.3 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3f:01.4 cannot be used 00:37:06.785 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3f:01.5 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3f:01.6 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3f:01.7 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3f:02.0 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3f:02.1 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3f:02.2 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3f:02.3 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3f:02.4 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3f:02.5 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3f:02.6 cannot be used 00:37:06.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:06.785 EAL: Requested device 0000:3f:02.7 cannot be used 00:37:07.042 [2024-07-13 22:20:26.174836] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:07.043 [2024-07-13 22:20:26.383603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:08.982  Copying: 64/64 [kB] (average 12 MBps) 00:37:08.982 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@90 -- # get_stat sequence_executed 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:08.982 
22:20:28 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@39 -- # opcode= 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:37:08.982 22:20:28 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:08.982 22:20:28 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:08.982 22:20:28 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@90 -- # (( 13 == stats[sequence_executed] + 1 )) 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@91 -- # get_stat executed encrypt 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:37:08.982 22:20:28 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:08.982 22:20:28 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:37:08.982 22:20:28 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@91 -- # (( 2 == stats[encrypt_executed] + 2 )) 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@92 -- # get_stat executed decrypt 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:08.982 22:20:28 chaining -- 
bdev/chaining.sh@39 -- # event=executed 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:37:08.982 22:20:28 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:08.982 22:20:28 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:08.982 22:20:28 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@92 -- # (( 12 == stats[decrypt_executed] )) 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@95 -- # get_stat executed copy 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:37:08.982 22:20:28 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:37:08.982 22:20:28 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:08.982 22:20:28 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:08.982 22:20:28 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@95 -- # (( 4 == stats[copy_executed] )) 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@96 -- # update_stats 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@37 
-- # local event opcode rpc 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@39 -- # opcode= 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:37:09.240 22:20:28 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:09.240 22:20:28 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:37:09.240 22:20:28 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=13 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:37:09.240 22:20:28 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:09.240 22:20:28 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:37:09.240 22:20:28 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=2 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:09.240 22:20:28 
chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:37:09.240 22:20:28 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:09.240 22:20:28 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:09.240 22:20:28 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:37:09.240 22:20:28 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:09.240 22:20:28 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:37:09.240 22:20:28 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@99 -- # spdk_dd --of /tmp/tmp.mWRCQgB33r --ib Nvme0n1 --bs 65536 --count 1 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@25 -- # local config 00:37:09.240 
22:20:28 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:37:09.240 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:37:09.240 22:20:28 chaining -- bdev/chaining.sh@31 -- # config='{ 00:37:09.240 "subsystems": [ 00:37:09.240 { 00:37:09.240 "subsystem": "bdev", 00:37:09.240 "config": [ 00:37:09.240 { 00:37:09.240 "method": "bdev_nvme_attach_controller", 00:37:09.240 "params": { 00:37:09.241 "trtype": "tcp", 00:37:09.241 "adrfam": "IPv4", 00:37:09.241 "name": "Nvme0", 00:37:09.241 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:09.241 "traddr": "10.0.0.2", 00:37:09.241 "trsvcid": "4420" 00:37:09.241 } 00:37:09.241 }, 00:37:09.241 { 00:37:09.241 "method": "bdev_set_options", 00:37:09.241 "params": { 00:37:09.241 "bdev_auto_examine": false 00:37:09.241 } 00:37:09.241 } 00:37:09.241 ] 00:37:09.241 } 00:37:09.241 ] 00:37:09.241 }' 00:37:09.241 22:20:28 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.mWRCQgB33r --ib Nvme0n1 --bs 65536 --count 1 00:37:09.241 22:20:28 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:37:09.241 "subsystems": [ 00:37:09.241 { 00:37:09.241 "subsystem": "bdev", 00:37:09.241 "config": [ 00:37:09.241 { 00:37:09.241 "method": "bdev_nvme_attach_controller", 00:37:09.241 "params": { 00:37:09.241 "trtype": "tcp", 00:37:09.241 "adrfam": "IPv4", 00:37:09.241 "name": "Nvme0", 00:37:09.241 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:09.241 "traddr": "10.0.0.2", 00:37:09.241 "trsvcid": "4420" 00:37:09.241 } 00:37:09.241 }, 00:37:09.241 { 00:37:09.241 "method": "bdev_set_options", 00:37:09.241 "params": { 00:37:09.241 "bdev_auto_examine": false 00:37:09.241 } 00:37:09.241 } 00:37:09.241 ] 
00:37:09.241 } 00:37:09.241 ] 00:37:09.241 }' 00:37:09.499 [2024-07-13 22:20:28.681370] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:09.499 [2024-07-13 22:20:28.681473] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1626176 ] 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3d:01.0 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3d:01.1 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3d:01.2 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3d:01.3 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3d:01.4 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3d:01.5 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3d:01.6 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3d:01.7 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3d:02.0 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3d:02.1 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3d:02.2 
cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3d:02.3 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3d:02.4 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3d:02.5 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3d:02.6 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3d:02.7 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3f:01.0 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3f:01.1 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3f:01.2 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3f:01.3 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3f:01.4 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3f:01.5 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3f:01.6 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3f:01.7 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3f:02.0 cannot be used 
00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3f:02.1 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3f:02.2 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3f:02.3 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3f:02.4 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3f:02.5 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3f:02.6 cannot be used 00:37:09.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:09.499 EAL: Requested device 0000:3f:02.7 cannot be used 00:37:09.499 [2024-07-13 22:20:28.843870] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:09.757 [2024-07-13 22:20:29.054976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:11.696  Copying: 64/64 [kB] (average 12 MBps) 00:37:11.696 00:37:11.696 22:20:30 chaining -- bdev/chaining.sh@100 -- # get_stat sequence_executed 00:37:11.696 22:20:30 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:11.696 22:20:30 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:37:11.696 22:20:30 chaining -- bdev/chaining.sh@39 -- # opcode= 00:37:11.696 22:20:30 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:11.696 22:20:30 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:37:11.696 22:20:30 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:37:11.696 22:20:30 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:37:11.696 22:20:30 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:11.696 22:20:30 
chaining -- common/autotest_common.sh@10 -- # set +x 00:37:11.697 22:20:30 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@100 -- # (( 14 == stats[sequence_executed] + 1 )) 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@101 -- # get_stat executed encrypt 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:37:11.697 22:20:30 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:11.697 22:20:30 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:11.697 22:20:30 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@101 -- # (( 2 == stats[encrypt_executed] )) 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@102 -- # get_stat executed decrypt 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:37:11.697 22:20:30 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 
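The `get_stat` helper traced above combines `rpc_cmd accel_get_stats` with a jq filter (`.sequence_executed`, or `.operations[] | select(.opcode == "...").executed`). A minimal Python sketch of the same extraction; the stats payload here is a hand-written stand-in for the RPC output, not real test data:

```python
import json

# Stand-in for `accel_get_stats` RPC output; only the fields the jq
# filters in the trace touch are reproduced here.
stats_json = '''{
  "sequence_executed": 13,
  "operations": [
    {"opcode": "encrypt", "executed": 2},
    {"opcode": "decrypt", "executed": 12},
    {"opcode": "copy",    "executed": 4}
  ]
}'''

def get_stat(stats: dict, opcode: str = "") -> int:
    # No opcode -> top-level counter, like `jq -r .sequence_executed`;
    # otherwise select the matching operation, like
    # `jq -r '.operations[] | select(.opcode == "...").executed'`.
    if not opcode:
        return stats["sequence_executed"]
    return next(op["executed"] for op in stats["operations"]
                if op["opcode"] == opcode)

stats = json.loads(stats_json)
print(get_stat(stats))             # 13
print(get_stat(stats, "encrypt"))  # 2
```

The test then compares these counters before and after each `spdk_dd` transfer, e.g. `(( 14 == stats[sequence_executed] + 1 ))` asserts exactly one new accel sequence ran.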
00:37:11.697 22:20:30 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:11.697 22:20:30 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@102 -- # (( 14 == stats[decrypt_executed] + 2 )) 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@103 -- # get_stat executed copy 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:37:11.697 22:20:30 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:11.697 22:20:30 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:37:11.697 22:20:30 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@103 -- # (( 4 == stats[copy_executed] )) 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@104 -- # cmp /tmp/tmp.Fa9hH0O262 /tmp/tmp.mWRCQgB33r 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@105 -- # spdk_dd --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@25 -- # local config 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:37:11.697 22:20:30 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:37:11.697 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:37:11.697 22:20:31 chaining -- 
bdev/chaining.sh@31 -- # config='{ 00:37:11.697 "subsystems": [ 00:37:11.697 { 00:37:11.697 "subsystem": "bdev", 00:37:11.697 "config": [ 00:37:11.697 { 00:37:11.697 "method": "bdev_nvme_attach_controller", 00:37:11.697 "params": { 00:37:11.697 "trtype": "tcp", 00:37:11.697 "adrfam": "IPv4", 00:37:11.697 "name": "Nvme0", 00:37:11.697 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:11.697 "traddr": "10.0.0.2", 00:37:11.697 "trsvcid": "4420" 00:37:11.697 } 00:37:11.697 }, 00:37:11.697 { 00:37:11.697 "method": "bdev_set_options", 00:37:11.697 "params": { 00:37:11.697 "bdev_auto_examine": false 00:37:11.697 } 00:37:11.697 } 00:37:11.697 ] 00:37:11.697 } 00:37:11.697 ] 00:37:11.697 }' 00:37:11.697 22:20:31 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1 00:37:11.697 22:20:31 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:37:11.697 "subsystems": [ 00:37:11.697 { 00:37:11.697 "subsystem": "bdev", 00:37:11.697 "config": [ 00:37:11.697 { 00:37:11.697 "method": "bdev_nvme_attach_controller", 00:37:11.697 "params": { 00:37:11.697 "trtype": "tcp", 00:37:11.697 "adrfam": "IPv4", 00:37:11.697 "name": "Nvme0", 00:37:11.697 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:11.697 "traddr": "10.0.0.2", 00:37:11.697 "trsvcid": "4420" 00:37:11.697 } 00:37:11.697 }, 00:37:11.697 { 00:37:11.697 "method": "bdev_set_options", 00:37:11.697 "params": { 00:37:11.697 "bdev_auto_examine": false 00:37:11.697 } 00:37:11.697 } 00:37:11.697 ] 00:37:11.697 } 00:37:11.697 ] 00:37:11.697 }' 00:37:11.955 [2024-07-13 22:20:31.112759] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:37:11.955 [2024-07-13 22:20:31.112848] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1626643 ]
00:37:11.956 [2024-07-13 22:20:31.273701] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:12.214 [2024-07-13 22:20:31.482746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:14.152  Copying: 64/64 [kB] (average 62 MBps) 00:37:14.152 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@106 -- # update_stats 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@39 -- # opcode= 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:37:14.152 22:20:33 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:14.152 22:20:33 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:14.152 22:20:33 chaining
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=15 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:37:14.152 22:20:33 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:14.152 22:20:33 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:14.152 22:20:33 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=4 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:37:14.152 22:20:33 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:14.152 22:20:33 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:14.152 22:20:33 chaining -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:37:14.152 22:20:33 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:37:14.152 22:20:33 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:14.152 22:20:33 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:14.152 22:20:33 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:14.153 22:20:33 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:37:14.153 22:20:33 chaining -- bdev/chaining.sh@109 -- # spdk_dd --if /tmp/tmp.Fa9hH0O262 --ob Nvme0n1 --bs 4096 --count 16 00:37:14.153 22:20:33 chaining -- bdev/chaining.sh@25 -- # local config 00:37:14.153 22:20:33 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:37:14.153 22:20:33 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:37:14.153 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:37:14.153 22:20:33 chaining -- bdev/chaining.sh@31 -- # config='{ 00:37:14.153 "subsystems": [ 00:37:14.153 { 00:37:14.153 "subsystem": "bdev", 00:37:14.153 "config": [ 00:37:14.153 { 00:37:14.153 "method": "bdev_nvme_attach_controller", 00:37:14.153 "params": 
{ 00:37:14.153 "trtype": "tcp", 00:37:14.153 "adrfam": "IPv4", 00:37:14.153 "name": "Nvme0", 00:37:14.153 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:14.153 "traddr": "10.0.0.2", 00:37:14.153 "trsvcid": "4420" 00:37:14.153 } 00:37:14.153 }, 00:37:14.153 { 00:37:14.153 "method": "bdev_set_options", 00:37:14.153 "params": { 00:37:14.153 "bdev_auto_examine": false 00:37:14.153 } 00:37:14.153 } 00:37:14.153 ] 00:37:14.153 } 00:37:14.153 ] 00:37:14.153 }' 00:37:14.153 22:20:33 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.Fa9hH0O262 --ob Nvme0n1 --bs 4096 --count 16 00:37:14.153 22:20:33 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:37:14.153 "subsystems": [ 00:37:14.153 { 00:37:14.153 "subsystem": "bdev", 00:37:14.153 "config": [ 00:37:14.153 { 00:37:14.153 "method": "bdev_nvme_attach_controller", 00:37:14.153 "params": { 00:37:14.153 "trtype": "tcp", 00:37:14.153 "adrfam": "IPv4", 00:37:14.153 "name": "Nvme0", 00:37:14.153 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:14.153 "traddr": "10.0.0.2", 00:37:14.153 "trsvcid": "4420" 00:37:14.153 } 00:37:14.153 }, 00:37:14.153 { 00:37:14.153 "method": "bdev_set_options", 00:37:14.153 "params": { 00:37:14.153 "bdev_auto_examine": false 00:37:14.153 } 00:37:14.153 } 00:37:14.153 ] 00:37:14.153 } 00:37:14.153 ] 00:37:14.153 }' 00:37:14.409 [2024-07-13 22:20:33.614694] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:37:14.410 [2024-07-13 22:20:33.614793] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1627020 ] 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3d:01.0 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3d:01.1 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3d:01.2 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3d:01.3 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3d:01.4 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3d:01.5 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3d:01.6 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3d:01.7 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3d:02.0 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3d:02.1 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3d:02.2 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3d:02.3 cannot be used 
00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3d:02.4 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3d:02.5 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3d:02.6 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3d:02.7 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3f:01.0 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3f:01.1 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3f:01.2 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3f:01.3 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3f:01.4 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3f:01.5 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3f:01.6 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3f:01.7 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3f:02.0 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3f:02.1 cannot be used 00:37:14.410 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3f:02.2 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3f:02.3 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3f:02.4 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3f:02.5 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3f:02.6 cannot be used 00:37:14.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:14.410 EAL: Requested device 0000:3f:02.7 cannot be used 00:37:14.410 [2024-07-13 22:20:33.775785] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:14.667 [2024-07-13 22:20:33.988304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:16.603  Copying: 64/64 [kB] (average 20 MBps) 00:37:16.603 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@110 -- # get_stat sequence_executed 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@39 -- # opcode= 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:37:16.603 22:20:35 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.603 22:20:35 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:16.603 22:20:35 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.603 22:20:35 
chaining -- bdev/chaining.sh@110 -- # (( 31 == stats[sequence_executed] + 16 )) 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@111 -- # get_stat executed encrypt 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:37:16.603 22:20:35 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.603 22:20:35 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:16.603 22:20:35 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@111 -- # (( 36 == stats[encrypt_executed] + 32 )) 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@112 -- # get_stat executed decrypt 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:37:16.603 22:20:35 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.603 22:20:35 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:37:16.603 22:20:35 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@112 -- # (( 14 == stats[decrypt_executed] )) 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@113 -- # get_stat executed copy 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:37:16.603 22:20:35 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.603 22:20:35 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:16.603 22:20:35 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:37:16.603 22:20:35 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@113 -- # (( 4 == stats[copy_executed] )) 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@114 -- # update_stats 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@39 -- # opcode= 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:37:16.861 22:20:36 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.861 22:20:36 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:16.861 22:20:36 chaining -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=31 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:37:16.861 22:20:36 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.861 22:20:36 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:16.861 22:20:36 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=36 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:37:16.861 22:20:36 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.861 22:20:36 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:37:16.861 22:20:36 chaining -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:37:16.861 22:20:36 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.861 22:20:36 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:37:16.861 22:20:36 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@117 -- # : 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@118 -- # spdk_dd --of /tmp/tmp.mWRCQgB33r --ib Nvme0n1 --bs 4096 --count 16 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@25 -- # local config 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:37:16.861 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@31 -- # config='{ 00:37:16.861 "subsystems": [ 00:37:16.861 { 00:37:16.861 "subsystem": "bdev", 00:37:16.861 "config": [ 00:37:16.861 { 00:37:16.861 
"method": "bdev_nvme_attach_controller", 00:37:16.861 "params": { 00:37:16.861 "trtype": "tcp", 00:37:16.861 "adrfam": "IPv4", 00:37:16.861 "name": "Nvme0", 00:37:16.861 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:16.861 "traddr": "10.0.0.2", 00:37:16.861 "trsvcid": "4420" 00:37:16.861 } 00:37:16.861 }, 00:37:16.861 { 00:37:16.861 "method": "bdev_set_options", 00:37:16.861 "params": { 00:37:16.861 "bdev_auto_examine": false 00:37:16.861 } 00:37:16.861 } 00:37:16.861 ] 00:37:16.861 } 00:37:16.861 ] 00:37:16.861 }' 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.mWRCQgB33r --ib Nvme0n1 --bs 4096 --count 16 00:37:16.861 22:20:36 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:37:16.861 "subsystems": [ 00:37:16.861 { 00:37:16.861 "subsystem": "bdev", 00:37:16.861 "config": [ 00:37:16.861 { 00:37:16.861 "method": "bdev_nvme_attach_controller", 00:37:16.861 "params": { 00:37:16.861 "trtype": "tcp", 00:37:16.861 "adrfam": "IPv4", 00:37:16.861 "name": "Nvme0", 00:37:16.861 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:16.861 "traddr": "10.0.0.2", 00:37:16.861 "trsvcid": "4420" 00:37:16.861 } 00:37:16.861 }, 00:37:16.861 { 00:37:16.861 "method": "bdev_set_options", 00:37:16.861 "params": { 00:37:16.861 "bdev_auto_examine": false 00:37:16.861 } 00:37:16.861 } 00:37:16.861 ] 00:37:16.861 } 00:37:16.861 ] 00:37:16.861 }' 00:37:17.118 [2024-07-13 22:20:36.289291] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:37:17.118 [2024-07-13 22:20:36.289381] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1627576 ] 00:37:17.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.118 EAL: Requested device 0000:3d:01.0 cannot be used 00:37:17.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.118 EAL: Requested device 0000:3d:01.1 cannot be used 00:37:17.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.118 EAL: Requested device 0000:3d:01.2 cannot be used 00:37:17.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.118 EAL: Requested device 0000:3d:01.3 cannot be used 00:37:17.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.118 EAL: Requested device 0000:3d:01.4 cannot be used 00:37:17.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.118 EAL: Requested device 0000:3d:01.5 cannot be used 00:37:17.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.118 EAL: Requested device 0000:3d:01.6 cannot be used 00:37:17.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.118 EAL: Requested device 0000:3d:01.7 cannot be used 00:37:17.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.118 EAL: Requested device 0000:3d:02.0 cannot be used 00:37:17.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.118 EAL: Requested device 0000:3d:02.1 cannot be used 00:37:17.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.118 EAL: Requested device 0000:3d:02.2 cannot be used 00:37:17.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.118 EAL: Requested device 0000:3d:02.3 cannot be used 
00:37:17.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.118 EAL: Requested device 0000:3d:02.4 cannot be used 00:37:17.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.118 EAL: Requested device 0000:3d:02.5 cannot be used 00:37:17.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.119 EAL: Requested device 0000:3d:02.6 cannot be used 00:37:17.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.119 EAL: Requested device 0000:3d:02.7 cannot be used 00:37:17.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.119 EAL: Requested device 0000:3f:01.0 cannot be used 00:37:17.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.119 EAL: Requested device 0000:3f:01.1 cannot be used 00:37:17.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.119 EAL: Requested device 0000:3f:01.2 cannot be used 00:37:17.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.119 EAL: Requested device 0000:3f:01.3 cannot be used 00:37:17.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.119 EAL: Requested device 0000:3f:01.4 cannot be used 00:37:17.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.119 EAL: Requested device 0000:3f:01.5 cannot be used 00:37:17.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.119 EAL: Requested device 0000:3f:01.6 cannot be used 00:37:17.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.119 EAL: Requested device 0000:3f:01.7 cannot be used 00:37:17.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.119 EAL: Requested device 0000:3f:02.0 cannot be used 00:37:17.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.119 EAL: Requested device 0000:3f:02.1 cannot be used 00:37:17.119 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.119 EAL: Requested device 0000:3f:02.2 cannot be used 00:37:17.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.119 EAL: Requested device 0000:3f:02.3 cannot be used 00:37:17.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.119 EAL: Requested device 0000:3f:02.4 cannot be used 00:37:17.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.119 EAL: Requested device 0000:3f:02.5 cannot be used 00:37:17.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.119 EAL: Requested device 0000:3f:02.6 cannot be used 00:37:17.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:17.119 EAL: Requested device 0000:3f:02.7 cannot be used 00:37:17.119 [2024-07-13 22:20:36.446898] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:17.376 [2024-07-13 22:20:36.668682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:19.685  Copying: 64/64 [kB] (average 492 kBps) 00:37:19.685 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@119 -- # get_stat sequence_executed 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@39 -- # opcode= 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:37:19.685 22:20:38 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:19.685 22:20:38 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:19.685 22:20:38 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:19.685 22:20:38 
chaining -- bdev/chaining.sh@119 -- # (( 47 == stats[sequence_executed] + 16 )) 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@120 -- # get_stat executed encrypt 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:37:19.685 22:20:38 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:19.685 22:20:38 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:19.685 22:20:38 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@120 -- # (( 36 == stats[encrypt_executed] )) 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@121 -- # get_stat executed decrypt 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:37:19.685 22:20:38 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:19.685 22:20:38 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:19.685 22:20:38 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@121 -- # (( 46 == stats[decrypt_executed] + 32 )) 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@122 -- # get_stat executed copy 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:37:19.685 22:20:38 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:19.685 22:20:38 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:19.685 22:20:38 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@122 -- # (( 4 == stats[copy_executed] )) 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@123 -- # cmp /tmp/tmp.Fa9hH0O262 /tmp/tmp.mWRCQgB33r 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@126 -- # tgtcleanup 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@58 -- # rm -f /tmp/tmp.Fa9hH0O262 /tmp/tmp.mWRCQgB33r 00:37:19.685 22:20:38 chaining -- bdev/chaining.sh@59 -- # nvmftestfini 00:37:19.685 22:20:38 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:19.685 22:20:38 chaining -- nvmf/common.sh@117 -- # sync 00:37:19.685 22:20:38 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:19.685 22:20:38 chaining -- nvmf/common.sh@120 -- # set +e 00:37:19.685 22:20:38 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:19.685 22:20:38 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:19.685 rmmod nvme_tcp 
00:37:19.685 rmmod nvme_fabrics 00:37:19.685 rmmod nvme_keyring 00:37:19.685 22:20:38 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:19.685 22:20:38 chaining -- nvmf/common.sh@124 -- # set -e 00:37:19.685 22:20:38 chaining -- nvmf/common.sh@125 -- # return 0 00:37:19.685 22:20:38 chaining -- nvmf/common.sh@489 -- # '[' -n 1625534 ']' 00:37:19.685 22:20:38 chaining -- nvmf/common.sh@490 -- # killprocess 1625534 00:37:19.685 22:20:38 chaining -- common/autotest_common.sh@948 -- # '[' -z 1625534 ']' 00:37:19.685 22:20:38 chaining -- common/autotest_common.sh@952 -- # kill -0 1625534 00:37:19.685 22:20:38 chaining -- common/autotest_common.sh@953 -- # uname 00:37:19.685 22:20:38 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:19.685 22:20:38 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1625534 00:37:19.685 22:20:38 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:19.685 22:20:38 chaining -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:19.686 22:20:38 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1625534' 00:37:19.686 killing process with pid 1625534 00:37:19.686 22:20:38 chaining -- common/autotest_common.sh@967 -- # kill 1625534 00:37:19.686 22:20:38 chaining -- common/autotest_common.sh@972 -- # wait 1625534 00:37:21.064 22:20:40 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:21.064 22:20:40 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:21.064 22:20:40 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:21.064 22:20:40 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:21.064 22:20:40 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:21.064 22:20:40 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:21.064 22:20:40 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 
00:37:21.064 22:20:40 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:22.964 22:20:42 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:22.964 22:20:42 chaining -- bdev/chaining.sh@129 -- # trap 'bperfcleanup; exit 1' SIGINT SIGTERM EXIT 00:37:22.964 22:20:42 chaining -- bdev/chaining.sh@132 -- # bperfpid=1628570 00:37:22.964 22:20:42 chaining -- bdev/chaining.sh@134 -- # waitforlisten 1628570 00:37:22.964 22:20:42 chaining -- bdev/chaining.sh@131 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:37:22.964 22:20:42 chaining -- common/autotest_common.sh@829 -- # '[' -z 1628570 ']' 00:37:22.964 22:20:42 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:22.964 22:20:42 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:22.964 22:20:42 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:22.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:22.964 22:20:42 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:22.964 22:20:42 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:22.964 [2024-07-13 22:20:42.353260] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:37:22.964 [2024-07-13 22:20:42.353357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1628570 ] 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3d:01.0 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3d:01.1 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3d:01.2 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3d:01.3 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3d:01.4 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3d:01.5 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3d:01.6 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3d:01.7 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3d:02.0 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3d:02.1 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3d:02.2 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3d:02.3 cannot be used 
00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3d:02.4 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3d:02.5 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3d:02.6 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3d:02.7 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3f:01.0 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3f:01.1 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3f:01.2 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3f:01.3 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3f:01.4 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3f:01.5 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3f:01.6 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3f:01.7 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3f:02.0 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3f:02.1 cannot be used 00:37:23.222 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3f:02.2 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3f:02.3 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3f:02.4 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3f:02.5 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3f:02.6 cannot be used 00:37:23.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:23.222 EAL: Requested device 0000:3f:02.7 cannot be used 00:37:23.222 [2024-07-13 22:20:42.516215] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:23.479 [2024-07-13 22:20:42.715610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:23.736 22:20:43 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:23.736 22:20:43 chaining -- common/autotest_common.sh@862 -- # return 0 00:37:23.736 22:20:43 chaining -- bdev/chaining.sh@135 -- # rpc_cmd 00:37:23.736 22:20:43 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:23.736 22:20:43 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:24.299 malloc0 00:37:24.299 true 00:37:24.299 true 00:37:24.299 [2024-07-13 22:20:43.544153] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:37:24.299 crypto0 00:37:24.299 [2024-07-13 22:20:43.552187] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:37:24.299 crypto1 00:37:24.299 22:20:43 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.299 22:20:43 chaining -- bdev/chaining.sh@145 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py 
perform_tests 00:37:24.299 Running I/O for 5 seconds... 00:37:29.554 00:37:29.554 Latency(us) 00:37:29.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:29.554 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:37:29.554 Verification LBA range: start 0x0 length 0x2000 00:37:29.554 crypto1 : 5.01 17094.53 66.78 0.00 0.00 14937.48 1900.54 10171.19 00:37:29.554 =================================================================================================================== 00:37:29.554 Total : 17094.53 66.78 0.00 0.00 14937.48 1900.54 10171.19 00:37:29.554 0 00:37:29.554 22:20:48 chaining -- bdev/chaining.sh@146 -- # killprocess 1628570 00:37:29.554 22:20:48 chaining -- common/autotest_common.sh@948 -- # '[' -z 1628570 ']' 00:37:29.554 22:20:48 chaining -- common/autotest_common.sh@952 -- # kill -0 1628570 00:37:29.554 22:20:48 chaining -- common/autotest_common.sh@953 -- # uname 00:37:29.554 22:20:48 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:29.554 22:20:48 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1628570 00:37:29.554 22:20:48 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:29.554 22:20:48 chaining -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:29.554 22:20:48 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1628570' 00:37:29.554 killing process with pid 1628570 00:37:29.554 22:20:48 chaining -- common/autotest_common.sh@967 -- # kill 1628570 00:37:29.554 Received shutdown signal, test time was about 5.000000 seconds 00:37:29.554 00:37:29.554 Latency(us) 00:37:29.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:29.554 =================================================================================================================== 00:37:29.554 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:29.554 22:20:48 chaining -- 
common/autotest_common.sh@972 -- # wait 1628570 00:37:30.924 22:20:49 chaining -- bdev/chaining.sh@152 -- # bperfpid=1629747 00:37:30.924 22:20:49 chaining -- bdev/chaining.sh@151 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:37:30.924 22:20:49 chaining -- bdev/chaining.sh@154 -- # waitforlisten 1629747 00:37:30.924 22:20:49 chaining -- common/autotest_common.sh@829 -- # '[' -z 1629747 ']' 00:37:30.924 22:20:49 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:30.924 22:20:49 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:30.924 22:20:49 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:30.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:30.924 22:20:49 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:30.924 22:20:49 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:30.924 [2024-07-13 22:20:50.033277] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:37:30.924 [2024-07-13 22:20:50.033382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1629747 ] 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3d:01.0 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3d:01.1 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3d:01.2 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3d:01.3 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3d:01.4 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3d:01.5 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3d:01.6 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3d:01.7 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3d:02.0 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3d:02.1 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3d:02.2 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3d:02.3 cannot be used 
00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3d:02.4 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3d:02.5 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3d:02.6 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3d:02.7 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3f:01.0 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3f:01.1 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3f:01.2 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3f:01.3 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3f:01.4 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3f:01.5 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3f:01.6 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3f:01.7 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3f:02.0 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3f:02.1 cannot be used 00:37:30.924 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3f:02.2 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3f:02.3 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3f:02.4 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3f:02.5 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3f:02.6 cannot be used 00:37:30.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:30.924 EAL: Requested device 0000:3f:02.7 cannot be used 00:37:30.924 [2024-07-13 22:20:50.199576] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:31.181 [2024-07-13 22:20:50.407773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:31.438 22:20:50 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:31.438 22:20:50 chaining -- common/autotest_common.sh@862 -- # return 0 00:37:31.438 22:20:50 chaining -- bdev/chaining.sh@155 -- # rpc_cmd 00:37:31.438 22:20:50 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.438 22:20:50 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:31.999 malloc0 00:37:31.999 true 00:37:31.999 true 00:37:31.999 [2024-07-13 22:20:51.223415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:37:31.999 [2024-07-13 22:20:51.223473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:31.999 [2024-07-13 22:20:51.223495] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:37:31.999 [2024-07-13 22:20:51.223508] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:31.999 [2024-07-13 
22:20:51.224700] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:31.999 [2024-07-13 22:20:51.224733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:37:31.999 pt0 00:37:31.999 [2024-07-13 22:20:51.231453] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:37:31.999 crypto0 00:37:31.999 [2024-07-13 22:20:51.239459] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:37:31.999 crypto1 00:37:31.999 22:20:51 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.999 22:20:51 chaining -- bdev/chaining.sh@166 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:37:31.999 Running I/O for 5 seconds... 00:37:37.256 00:37:37.256 Latency(us) 00:37:37.256 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:37.256 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:37:37.256 Verification LBA range: start 0x0 length 0x2000 00:37:37.256 crypto1 : 5.01 13307.45 51.98 0.00 0.00 19187.60 3316.12 12163.48 00:37:37.256 =================================================================================================================== 00:37:37.256 Total : 13307.45 51.98 0.00 0.00 19187.60 3316.12 12163.48 00:37:37.256 0 00:37:37.256 22:20:56 chaining -- bdev/chaining.sh@167 -- # killprocess 1629747 00:37:37.256 22:20:56 chaining -- common/autotest_common.sh@948 -- # '[' -z 1629747 ']' 00:37:37.256 22:20:56 chaining -- common/autotest_common.sh@952 -- # kill -0 1629747 00:37:37.256 22:20:56 chaining -- common/autotest_common.sh@953 -- # uname 00:37:37.256 22:20:56 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:37.256 22:20:56 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1629747 00:37:37.256 22:20:56 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:37.256 22:20:56 chaining 
-- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:37.256 22:20:56 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1629747' 00:37:37.256 killing process with pid 1629747 00:37:37.256 22:20:56 chaining -- common/autotest_common.sh@967 -- # kill 1629747 00:37:37.256 Received shutdown signal, test time was about 5.000000 seconds 00:37:37.256 00:37:37.256 Latency(us) 00:37:37.256 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:37.256 =================================================================================================================== 00:37:37.257 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:37.257 22:20:56 chaining -- common/autotest_common.sh@972 -- # wait 1629747 00:37:38.629 22:20:57 chaining -- bdev/chaining.sh@169 -- # trap - SIGINT SIGTERM EXIT 00:37:38.629 22:20:57 chaining -- bdev/chaining.sh@170 -- # killprocess 1629747 00:37:38.629 22:20:57 chaining -- common/autotest_common.sh@948 -- # '[' -z 1629747 ']' 00:37:38.629 22:20:57 chaining -- common/autotest_common.sh@952 -- # kill -0 1629747 00:37:38.629 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1629747) - No such process 00:37:38.629 22:20:57 chaining -- common/autotest_common.sh@975 -- # echo 'Process with pid 1629747 is not found' 00:37:38.629 Process with pid 1629747 is not found 00:37:38.629 22:20:57 chaining -- bdev/chaining.sh@171 -- # wait 1629747 00:37:38.629 22:20:57 chaining -- bdev/chaining.sh@175 -- # nvmftestinit 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@628 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:38.629 22:20:57 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:38.629 22:20:57 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:37:38.629 22:20:57 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@296 -- # e810=() 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@297 -- # x722=() 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@298 -- # mlx=() 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:38.629 22:20:57 
chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.0 (0x8086 - 0x159b)' 00:37:38.629 Found 0000:20:00.0 (0x8086 - 0x159b) 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:38.629 
22:20:57 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.1 (0x8086 - 0x159b)' 00:37:38.629 Found 0000:20:00.1 (0x8086 - 0x159b) 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:38.629 22:20:57 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.0: cvl_0_0' 00:37:38.630 Found net devices under 0000:20:00.0: cvl_0_0 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 
00:37:38.630 22:20:57 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.1: cvl_0_1' 00:37:38.630 Found net devices under 0000:20:00.1: cvl_0_1 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@414 -- # is_hw=yes 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:38.630 22:20:57 chaining -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:38.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:38.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:37:38.630 00:37:38.630 --- 10.0.0.2 ping statistics --- 00:37:38.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:38.630 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:38.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:38.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:37:38.630 00:37:38.630 --- 10.0.0.1 ping statistics --- 00:37:38.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:38.630 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@422 -- # return 0 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:38.630 22:20:57 chaining -- bdev/chaining.sh@176 -- # nvmfappstart -m 0x2 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:38.630 22:20:57 chaining -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:38.630 22:20:57 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@481 -- # nvmfpid=1631095 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:37:38.630 22:20:57 chaining -- nvmf/common.sh@482 -- # waitforlisten 1631095 00:37:38.630 22:20:57 chaining -- common/autotest_common.sh@829 -- # '[' -z 1631095 ']' 00:37:38.630 22:20:57 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:38.630 22:20:57 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:38.630 22:20:57 
chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:38.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:38.630 22:20:57 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:38.630 22:20:57 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:38.888 [2024-07-13 22:20:58.093982] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:38.888 [2024-07-13 22:20:58.094079] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:38.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:38.888 EAL: Requested device 0000:3d:01.0 cannot be used 00:37:38.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:38.888 EAL: Requested device 0000:3d:01.1 cannot be used 00:37:38.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:38.888 EAL: Requested device 0000:3d:01.2 cannot be used 00:37:38.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:38.888 EAL: Requested device 0000:3d:01.3 cannot be used 00:37:38.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:38.889 EAL: Requested device 0000:3d:01.4 cannot be used 00:37:38.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:38.889 EAL: Requested device 0000:3d:01.5 cannot be used 00:37:38.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:38.889 EAL: Requested device 0000:3d:01.6 cannot be used 00:37:38.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:38.889 EAL: Requested device 0000:3d:01.7 cannot be used 00:37:38.889 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:37:38.889 EAL: Requested device 0000:3d:02.0 cannot be used 00:37:38.889 qat_pci_device_allocate(): Reached maximum number of QAT devices (the same EAL/qat_pci_device_allocate() message pair repeats for every remaining QAT VF through 0000:3f:02.7) 00:37:38.889 [2024-07-13 22:20:58.263250] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:39.147 [2024-07-13 22:20:58.480024] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:39.147 [2024-07-13 22:20:58.480065] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:39.147 [2024-07-13 22:20:58.480080] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:39.147 [2024-07-13 22:20:58.480092] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:37:39.147 [2024-07-13 22:20:58.480104] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:39.147 [2024-07-13 22:20:58.480132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:39.764 22:20:58 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:39.764 22:20:58 chaining -- common/autotest_common.sh@862 -- # return 0 00:37:39.764 22:20:58 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:39.764 22:20:58 chaining -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:39.764 22:20:58 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:39.764 22:20:58 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:39.764 22:20:58 chaining -- bdev/chaining.sh@178 -- # rpc_cmd 00:37:39.764 22:20:58 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:39.765 22:20:58 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:39.765 malloc0 00:37:39.765 [2024-07-13 22:20:58.966585] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:39.765 [2024-07-13 22:20:58.982761] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:39.765 22:20:58 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:39.765 22:20:58 chaining -- bdev/chaining.sh@186 -- # trap 'bperfcleanup || :; nvmftestfini || :; exit 1' SIGINT SIGTERM EXIT 00:37:39.765 22:20:58 chaining -- bdev/chaining.sh@189 -- # bperfpid=1631373 00:37:39.765 22:20:58 chaining -- bdev/chaining.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:37:39.765 22:20:58 chaining -- bdev/chaining.sh@191 -- # waitforlisten 1631373 /var/tmp/bperf.sock 00:37:39.765 22:20:58 chaining -- common/autotest_common.sh@829 -- # '[' -z 1631373 ']' 00:37:39.765 22:20:58 chaining -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:39.765 22:20:58 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:39.765 22:20:58 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:39.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:39.765 22:20:58 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:39.765 22:20:58 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:39.765 [2024-07-13 22:20:59.083616] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:39.765 [2024-07-13 22:20:59.083715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1631373 ] 00:37:40.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.038 EAL: Requested device 0000:3d:01.0 cannot be used (the same qat_pci_device_allocate()/EAL message pair repeats for every QAT VF through 0000:3f:02.7) 00:37:40.039 [2024-07-13 22:20:59.246895] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:40.297 [2024-07-13 22:20:59.456004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:40.555 22:20:59 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:40.555 22:20:59 chaining -- common/autotest_common.sh@862 -- # return 0 00:37:40.555 22:20:59 chaining -- bdev/chaining.sh@192 -- # rpc_bperf 00:37:40.555 22:20:59
chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:37:41.121 [2024-07-13 22:21:00.446636] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:37:41.121 nvme0n1 00:37:41.121 true 00:37:41.121 crypto0 00:37:41.121 22:21:00 chaining -- bdev/chaining.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:41.378 Running I/O for 5 seconds... 00:37:46.639 00:37:46.639 Latency(us) 00:37:46.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:46.639 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:37:46.639 Verification LBA range: start 0x0 length 0x2000 00:37:46.639 crypto0 : 5.02 11380.17 44.45 0.00 0.00 22435.10 2398.62 20342.37 00:37:46.639 =================================================================================================================== 00:37:46.639 Total : 11380.17 44.45 0.00 0.00 22435.10 2398.62 20342.37 00:37:46.639 0 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@205 -- # get_stat_bperf sequence_executed 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@39 -- # opcode= 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:37:46.639 
22:21:05 chaining -- bdev/chaining.sh@205 -- # sequence=114144 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@206 -- # get_stat_bperf executed encrypt 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@206 -- # encrypt=57072 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@207 -- # get_stat_bperf executed decrypt 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:37:46.639 22:21:05 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
accel_get_stats 00:37:46.896 22:21:06 chaining -- bdev/chaining.sh@207 -- # decrypt=57072 00:37:46.896 22:21:06 chaining -- bdev/chaining.sh@208 -- # get_stat_bperf executed crc32c 00:37:46.896 22:21:06 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:37:46.896 22:21:06 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:46.896 22:21:06 chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:46.896 22:21:06 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:37:46.896 22:21:06 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:37:46.896 22:21:06 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:37:46.896 22:21:06 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:37:46.896 22:21:06 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:37:46.896 22:21:06 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:47.153 22:21:06 chaining -- bdev/chaining.sh@208 -- # crc32c=114144 00:37:47.153 22:21:06 chaining -- bdev/chaining.sh@210 -- # (( sequence > 0 )) 00:37:47.153 22:21:06 chaining -- bdev/chaining.sh@211 -- # (( encrypt + decrypt == sequence )) 00:37:47.153 22:21:06 chaining -- bdev/chaining.sh@212 -- # (( encrypt + decrypt == crc32c )) 00:37:47.153 22:21:06 chaining -- bdev/chaining.sh@214 -- # killprocess 1631373 00:37:47.153 22:21:06 chaining -- common/autotest_common.sh@948 -- # '[' -z 1631373 ']' 00:37:47.153 22:21:06 chaining -- common/autotest_common.sh@952 -- # kill -0 1631373 00:37:47.153 22:21:06 chaining -- common/autotest_common.sh@953 -- # uname 00:37:47.153 22:21:06 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:47.153 22:21:06 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1631373 00:37:47.153 22:21:06 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:47.153 22:21:06 
chaining -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:47.153 22:21:06 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1631373' 00:37:47.153 killing process with pid 1631373 00:37:47.153 22:21:06 chaining -- common/autotest_common.sh@967 -- # kill 1631373 00:37:47.153 Received shutdown signal, test time was about 5.000000 seconds 00:37:47.153 00:37:47.153 Latency(us) 00:37:47.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:47.153 =================================================================================================================== 00:37:47.153 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:47.153 22:21:06 chaining -- common/autotest_common.sh@972 -- # wait 1631373 00:37:48.524 22:21:07 chaining -- bdev/chaining.sh@219 -- # bperfpid=1633252 00:37:48.524 22:21:07 chaining -- bdev/chaining.sh@217 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 65536 -q 32 --wait-for-rpc -z 00:37:48.524 22:21:07 chaining -- bdev/chaining.sh@221 -- # waitforlisten 1633252 /var/tmp/bperf.sock 00:37:48.524 22:21:07 chaining -- common/autotest_common.sh@829 -- # '[' -z 1633252 ']' 00:37:48.524 22:21:07 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:48.524 22:21:07 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:48.524 22:21:07 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:48.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:48.524 22:21:07 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:48.524 22:21:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:37:48.524 [2024-07-13 22:21:07.728422] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:37:48.524 [2024-07-13 22:21:07.728518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1633252 ] 00:37:48.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:48.524 EAL: Requested device 0000:3d:01.0 cannot be used (the same qat_pci_device_allocate()/EAL message pair repeats for every QAT VF through 0000:3f:02.7) 00:37:48.525 [2024-07-13 22:21:07.892682] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:48.783 [2024-07-13 22:21:08.108288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:49.348 22:21:08 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:49.348 22:21:08 chaining -- common/autotest_common.sh@862 -- # return 0 00:37:49.348 22:21:08 chaining -- bdev/chaining.sh@222 -- # rpc_bperf 00:37:49.348 22:21:08 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:37:49.914 [2024-07-13 22:21:09.102273] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:37:49.914 nvme0n1 00:37:49.914 true 00:37:49.914 crypto0 00:37:49.914 22:21:09 chaining -- bdev/chaining.sh@231 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:49.914 Running I/O for 5 seconds...
00:37:55.177 00:37:55.177 Latency(us) 00:37:55.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:55.177 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:37:55.177 Verification LBA range: start 0x0 length 0x200 00:37:55.177 crypto0 : 5.00 2277.72 142.36 0.00 0.00 13781.69 458.75 16462.64 00:37:55.178 =================================================================================================================== 00:37:55.178 Total : 2277.72 142.36 0.00 0.00 13781.69 458.75 16462.64 00:37:55.178 0 00:37:55.178 22:21:14 chaining -- bdev/chaining.sh@233 -- # get_stat_bperf sequence_executed 00:37:55.178 22:21:14 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:37:55.178 22:21:14 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:55.178 22:21:14 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:37:55.178 22:21:14 chaining -- bdev/chaining.sh@39 -- # opcode= 00:37:55.178 22:21:14 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:37:55.178 22:21:14 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:37:55.178 22:21:14 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats 00:37:55.178 22:21:14 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:37:55.178 22:21:14 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:55.178 22:21:14 chaining -- bdev/chaining.sh@233 -- # sequence=22796 00:37:55.178 22:21:14 chaining -- bdev/chaining.sh@234 -- # get_stat_bperf executed encrypt 00:37:55.178 22:21:14 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:37:55.178 22:21:14 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:55.178 22:21:14 chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:55.178 22:21:14 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:37:55.178 22:21:14 chaining -- 
bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:37:55.178 22:21:14 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:37:55.178 22:21:14 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:37:55.178 22:21:14 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:37:55.178 22:21:14 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:55.435 22:21:14 chaining -- bdev/chaining.sh@234 -- # encrypt=11398 00:37:55.435 22:21:14 chaining -- bdev/chaining.sh@235 -- # get_stat_bperf executed decrypt 00:37:55.435 22:21:14 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:37:55.435 22:21:14 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:55.435 22:21:14 chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:55.435 22:21:14 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:37:55.435 22:21:14 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:37:55.435 22:21:14 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:37:55.435 22:21:14 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:37:55.435 22:21:14 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:55.435 22:21:14 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:37:55.435 22:21:14 chaining -- bdev/chaining.sh@235 -- # decrypt=11398 00:37:55.435 22:21:14 chaining -- bdev/chaining.sh@236 -- # get_stat_bperf executed crc32c 00:37:55.435 22:21:14 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:37:55.435 22:21:14 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:37:55.435 22:21:14 chaining -- bdev/chaining.sh@39 -- # event=executed 00:37:55.435 22:21:14 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:37:55.435 22:21:14 
chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:37:55.435 22:21:14 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:37:55.435 22:21:14 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:37:55.435 22:21:14 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:55.435 22:21:14 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:37:55.692 22:21:14 chaining -- bdev/chaining.sh@236 -- # crc32c=22796 00:37:55.693 22:21:14 chaining -- bdev/chaining.sh@238 -- # (( sequence > 0 )) 00:37:55.693 22:21:14 chaining -- bdev/chaining.sh@239 -- # (( encrypt + decrypt == sequence )) 00:37:55.693 22:21:14 chaining -- bdev/chaining.sh@240 -- # (( encrypt + decrypt == crc32c )) 00:37:55.693 22:21:14 chaining -- bdev/chaining.sh@242 -- # killprocess 1633252 00:37:55.693 22:21:14 chaining -- common/autotest_common.sh@948 -- # '[' -z 1633252 ']' 00:37:55.693 22:21:14 chaining -- common/autotest_common.sh@952 -- # kill -0 1633252 00:37:55.693 22:21:14 chaining -- common/autotest_common.sh@953 -- # uname 00:37:55.693 22:21:14 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:55.693 22:21:14 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1633252 00:37:55.693 22:21:15 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:55.693 22:21:15 chaining -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:55.693 22:21:15 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1633252' 00:37:55.693 killing process with pid 1633252 00:37:55.693 22:21:15 chaining -- common/autotest_common.sh@967 -- # kill 1633252 00:37:55.693 Received shutdown signal, test time was about 5.000000 seconds 00:37:55.693 00:37:55.693 Latency(us) 00:37:55.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:55.693 
=================================================================================================================== 00:37:55.693 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:55.693 22:21:15 chaining -- common/autotest_common.sh@972 -- # wait 1633252 00:37:57.065 22:21:16 chaining -- bdev/chaining.sh@243 -- # nvmftestfini 00:37:57.065 22:21:16 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:57.065 22:21:16 chaining -- nvmf/common.sh@117 -- # sync 00:37:57.065 22:21:16 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:57.065 22:21:16 chaining -- nvmf/common.sh@120 -- # set +e 00:37:57.065 22:21:16 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:57.065 22:21:16 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:57.065 rmmod nvme_tcp 00:37:57.065 rmmod nvme_fabrics 00:37:57.065 rmmod nvme_keyring 00:37:57.065 22:21:16 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:57.065 22:21:16 chaining -- nvmf/common.sh@124 -- # set -e 00:37:57.065 22:21:16 chaining -- nvmf/common.sh@125 -- # return 0 00:37:57.065 22:21:16 chaining -- nvmf/common.sh@489 -- # '[' -n 1631095 ']' 00:37:57.065 22:21:16 chaining -- nvmf/common.sh@490 -- # killprocess 1631095 00:37:57.065 22:21:16 chaining -- common/autotest_common.sh@948 -- # '[' -z 1631095 ']' 00:37:57.065 22:21:16 chaining -- common/autotest_common.sh@952 -- # kill -0 1631095 00:37:57.065 22:21:16 chaining -- common/autotest_common.sh@953 -- # uname 00:37:57.065 22:21:16 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:57.065 22:21:16 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1631095 00:37:57.065 22:21:16 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:57.065 22:21:16 chaining -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:57.065 22:21:16 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1631095' 00:37:57.065 killing process with pid 
1631095 00:37:57.065 22:21:16 chaining -- common/autotest_common.sh@967 -- # kill 1631095 00:37:57.065 22:21:16 chaining -- common/autotest_common.sh@972 -- # wait 1631095 00:37:58.440 22:21:17 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:58.440 22:21:17 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:58.440 22:21:17 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:58.441 22:21:17 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:58.441 22:21:17 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:58.441 22:21:17 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:58.441 22:21:17 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:58.441 22:21:17 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:00.375 22:21:19 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:00.375 22:21:19 chaining -- bdev/chaining.sh@245 -- # trap - SIGINT SIGTERM EXIT 00:38:00.375 00:38:00.375 real 1m4.256s 00:38:00.375 user 1m20.012s 00:38:00.375 sys 0m14.452s 00:38:00.375 22:21:19 chaining -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:00.375 22:21:19 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:00.375 ************************************ 00:38:00.375 END TEST chaining 00:38:00.375 ************************************ 00:38:00.633 22:21:19 -- common/autotest_common.sh@1142 -- # return 0 00:38:00.633 22:21:19 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:38:00.633 22:21:19 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:38:00.633 22:21:19 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:38:00.633 22:21:19 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:38:00.633 22:21:19 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:38:00.633 22:21:19 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:38:00.633 22:21:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:00.633 
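The chaining test above closes each bdevperf pass by pulling accel framework counters over the bperf RPC socket and asserting that every executed sequence decomposed into exactly one encrypt plus one decrypt, with a matching crc32c count. A hedged standalone sketch of that check follows; it is not the bdev/chaining.sh script itself, and the canned JSON stands in for a live `rpc.py -s /var/tmp/bperf.sock accel_get_stats` reply, reusing the counters from the first run in the log (114144 sequences, 57072 encrypts, 57072 decrypts). It assumes `jq` is installed, as the traced script itself uses it.

```shell
#!/usr/bin/env bash
# Canned accel_get_stats reply standing in for the live RPC call; a real run
# would do: scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
stats='{"sequence_executed":114144,"operations":[
  {"opcode":"encrypt","executed":57072},
  {"opcode":"decrypt","executed":57072},
  {"opcode":"crc32c","executed":114144}]}'

# Same jq filters the chaining test traces for each counter.
sequence=$(jq -r '.sequence_executed' <<< "$stats")
encrypt=$(jq -r '.operations[] | select(.opcode == "encrypt").executed' <<< "$stats")
decrypt=$(jq -r '.operations[] | select(.opcode == "decrypt").executed' <<< "$stats")
crc32c=$(jq -r '.operations[] | select(.opcode == "crc32c").executed' <<< "$stats")

# The three consistency checks visible at bdev/chaining.sh@210-212 and @238-240.
(( sequence > 0 )) &&
(( encrypt + decrypt == sequence )) &&
(( encrypt + decrypt == crc32c )) &&
echo "accel stats consistent"
```

With the log's counters, all three arithmetic checks hold and the snippet prints "accel stats consistent"; any drift between the sequence count and the per-opcode totals would make it exit non-zero, which is how the test flags a broken encrypt/decrypt chain.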
22:21:19 -- common/autotest_common.sh@10 -- # set +x 00:38:00.633 22:21:19 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:38:00.633 22:21:19 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:38:00.633 22:21:19 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:38:00.633 22:21:19 -- common/autotest_common.sh@10 -- # set +x 00:38:07.193 INFO: APP EXITING 00:38:07.193 INFO: killing all VMs 00:38:07.193 INFO: killing vhost app 00:38:07.193 WARN: no vhost pid file found 00:38:07.193 INFO: EXIT DONE 00:38:10.477 Waiting for block devices as requested 00:38:10.477 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:10.477 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:10.477 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:10.477 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:10.477 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:10.477 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:10.477 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:10.477 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:10.735 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:10.735 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:10.735 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:10.993 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:10.993 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:10.993 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:11.252 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:11.252 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:11.252 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:38:16.520 Cleaning 00:38:16.520 Removing: /var/run/dpdk/spdk0/config 00:38:16.520 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:16.520 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:16.520 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:16.520 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:16.520 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:16.520 
Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:16.520 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:16.520 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:16.520 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:16.520 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:16.520 Removing: /dev/shm/nvmf_trace.0 00:38:16.520 Removing: /dev/shm/spdk_tgt_trace.pid1276535 00:38:16.520 Removing: /var/run/dpdk/spdk0 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1269989 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1273991 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1276535 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1277740 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1279043 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1279638 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1281008 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1281279 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1282054 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1285884 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1288289 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1288869 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1289653 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1290327 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1291169 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1291459 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1291797 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1292311 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1293186 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1296576 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1297017 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1297441 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1298272 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1298911 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1299425 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1299947 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1300284 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1300811 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1301345 
00:38:16.520 Removing: /var/run/dpdk/spdk_pid1301808 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1302184 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1302729 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1303272 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1303678 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1304113 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1304659 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1305183 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1305519 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1306041 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1306580 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1307059 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1307439 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1307970 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1308516 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1308989 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1309868 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1310391 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1310875 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1311372 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1311954 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1312573 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1313108 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1313653 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1314039 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1314897 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1315810 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1316812 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1317441 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1322465 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1325081 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1327616 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1329212 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1331193 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1332425 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1332647 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1332736 00:38:16.520 Removing: 
/var/run/dpdk/spdk_pid1337862 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1338698 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1340287 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1340836 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1347455 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1349231 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1350388 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1355198 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1357005 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1358180 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1363030 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1365899 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1367335 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1377635 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1380042 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1381212 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1391517 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1393923 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1395099 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1406073 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1409577 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1410823 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1422237 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1424894 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1426125 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1438277 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1440942 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1442302 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1453802 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1457779 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1459202 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1460616 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1463865 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1469561 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1473208 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1478172 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1482113 00:38:16.520 Removing: /var/run/dpdk/spdk_pid1487879 
00:38:16.521 Removing: /var/run/dpdk/spdk_pid1491306 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1498626 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1501062 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1508391 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1510944 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1517839 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1520297 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1524995 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1525643 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1526332 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1527128 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1527996 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1528993 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1530065 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1530688 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1533014 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1535230 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1538161 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1540039 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1548342 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1553926 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1556225 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1558458 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1560649 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1562468 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1571321 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1576898 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1578225 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1579118 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1582433 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1585206 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1587975 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1589580 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1591435 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1592519 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1592785 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1593057 00:38:16.521 Removing: 
/var/run/dpdk/spdk_pid1593711 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1594197 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1595719 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1598067 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1600738 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1602061 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1603180 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1603772 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1603969 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1604174 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1605379 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1606704 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1607530 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1610906 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1613578 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1616233 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1618064 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1619919 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1620987 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1621033 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1625696 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1626176 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1626643 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1627020 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1627576 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1628570 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1629747 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1631373 00:38:16.521 Removing: /var/run/dpdk/spdk_pid1633252 00:38:16.521 Clean 00:38:16.521 22:21:35 -- common/autotest_common.sh@1451 -- # return 0 00:38:16.521 22:21:35 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:38:16.521 22:21:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:16.521 22:21:35 -- common/autotest_common.sh@10 -- # set +x 00:38:16.521 22:21:35 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:38:16.521 22:21:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:16.521 22:21:35 -- 
common/autotest_common.sh@10 -- # set +x 00:38:16.521 22:21:35 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt 00:38:16.779 22:21:35 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log ]] 00:38:16.779 22:21:35 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log 00:38:16.779 22:21:35 -- spdk/autotest.sh@391 -- # hash lcov 00:38:16.779 22:21:35 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:38:16.779 22:21:35 -- spdk/autotest.sh@393 -- # hostname 00:38:16.779 22:21:35 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/crypto-phy-autotest/spdk -t spdk-wfp-19 -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info 00:38:16.779 geninfo: WARNING: invalid characters removed from testname! 
00:38:34.852 22:21:54 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:38:37.439 22:21:56 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:38:38.824 22:21:58 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:38:40.726 22:21:59 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:38:42.102 22:22:01 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:38:44.013 22:22:02 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:38:45.392 22:22:04 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:45.392 22:22:04 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:38:45.392 22:22:04 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:45.392 22:22:04 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:45.392 22:22:04 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:45.392 22:22:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.392 22:22:04 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.392 22:22:04 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.392 22:22:04 -- paths/export.sh@5 -- $ export PATH 00:38:45.392 22:22:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.392 22:22:04 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:38:45.392 22:22:04 -- common/autobuild_common.sh@444 -- $ date +%s 00:38:45.392 22:22:04 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720902124.XXXXXX 00:38:45.392 22:22:04 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720902124.UeWq7F 00:38:45.392 22:22:04 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:38:45.392 22:22:04 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:38:45.392 22:22:04 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/' 00:38:45.392 22:22:04 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp' 00:38:45.392 22:22:04 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude 
/var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:38:45.392 22:22:04 -- common/autobuild_common.sh@460 -- $ get_config_params 00:38:45.392 22:22:04 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:38:45.392 22:22:04 -- common/autotest_common.sh@10 -- $ set +x 00:38:45.392 22:22:04 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:38:45.392 22:22:04 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:38:45.392 22:22:04 -- pm/common@17 -- $ local monitor 00:38:45.392 22:22:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:45.392 22:22:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:45.392 22:22:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:45.392 22:22:04 -- pm/common@21 -- $ date +%s 00:38:45.392 22:22:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:45.392 22:22:04 -- pm/common@21 -- $ date +%s 00:38:45.392 22:22:04 -- pm/common@21 -- $ date +%s 00:38:45.392 22:22:04 -- pm/common@25 -- $ sleep 1 00:38:45.392 22:22:04 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720902124 00:38:45.392 22:22:04 -- pm/common@21 -- $ date +%s 00:38:45.392 22:22:04 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720902124 00:38:45.392 22:22:04 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d 
/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720902124 00:38:45.392 22:22:04 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720902124 00:38:45.651 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720902124_collect-cpu-temp.pm.log 00:38:45.651 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720902124_collect-vmstat.pm.log 00:38:45.651 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720902124_collect-cpu-load.pm.log 00:38:45.651 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720902124_collect-bmc-pm.bmc.pm.log 00:38:46.588 22:22:05 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:38:46.588 22:22:05 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:38:46.588 22:22:05 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk 00:38:46.588 22:22:05 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:38:46.588 22:22:05 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:38:46.588 22:22:05 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:38:46.588 22:22:05 -- spdk/autopackage.sh@19 -- $ timing_finish 00:38:46.588 22:22:05 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:46.588 22:22:05 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:38:46.588 22:22:05 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt 00:38:46.588 22:22:05 -- spdk/autopackage.sh@20 -- $ exit 0 00:38:46.588 22:22:05 -- 
spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:38:46.588 22:22:05 -- pm/common@29 -- $ signal_monitor_resources TERM 00:38:46.588 22:22:05 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:38:46.588 22:22:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:46.588 22:22:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:38:46.588 22:22:05 -- pm/common@44 -- $ pid=1646831 00:38:46.588 22:22:05 -- pm/common@50 -- $ kill -TERM 1646831 00:38:46.588 22:22:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:46.588 22:22:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:38:46.588 22:22:05 -- pm/common@44 -- $ pid=1646833 00:38:46.588 22:22:05 -- pm/common@50 -- $ kill -TERM 1646833 00:38:46.588 22:22:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:46.588 22:22:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:38:46.588 22:22:05 -- pm/common@44 -- $ pid=1646835 00:38:46.588 22:22:05 -- pm/common@50 -- $ kill -TERM 1646835 00:38:46.588 22:22:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:46.588 22:22:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:38:46.588 22:22:05 -- pm/common@44 -- $ pid=1646860 00:38:46.588 22:22:05 -- pm/common@50 -- $ sudo -E kill -TERM 1646860 00:38:46.588 + [[ -n 1144387 ]] 00:38:46.588 + sudo kill 1144387 00:38:46.598 [Pipeline] } 00:38:46.617 [Pipeline] // stage 00:38:46.622 [Pipeline] } 00:38:46.639 [Pipeline] // timeout 00:38:46.645 [Pipeline] } 00:38:46.663 [Pipeline] // catchError 00:38:46.668 [Pipeline] } 00:38:46.687 [Pipeline] // wrap 00:38:46.693 [Pipeline] } 00:38:46.710 [Pipeline] // catchError 00:38:46.719 [Pipeline] stage 00:38:46.721 [Pipeline] { (Epilogue) 
00:38:46.737 [Pipeline] catchError 00:38:46.739 [Pipeline] { 00:38:46.756 [Pipeline] echo 00:38:46.758 Cleanup processes 00:38:46.765 [Pipeline] sh 00:38:47.050 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:38:47.050 1646956 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/sdr.cache 00:38:47.050 1647280 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:38:47.067 [Pipeline] sh 00:38:47.351 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:38:47.352 ++ grep -v 'sudo pgrep' 00:38:47.352 ++ awk '{print $1}' 00:38:47.352 + sudo kill -9 1646956 00:38:47.363 [Pipeline] sh 00:38:47.645 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:47.646 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:38:52.932 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:38:57.132 [Pipeline] sh 00:38:57.415 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:57.415 Artifacts sizes are good 00:38:57.430 [Pipeline] archiveArtifacts 00:38:57.438 Archiving artifacts 00:38:57.570 [Pipeline] sh 00:38:57.876 + sudo chown -R sys_sgci /var/jenkins/workspace/crypto-phy-autotest 00:38:57.892 [Pipeline] cleanWs 00:38:57.902 [WS-CLEANUP] Deleting project workspace... 00:38:57.902 [WS-CLEANUP] Deferred wipeout is used... 00:38:57.908 [WS-CLEANUP] done 00:38:57.910 [Pipeline] } 00:38:57.932 [Pipeline] // catchError 00:38:57.942 [Pipeline] sh 00:38:58.221 + logger -p user.info -t JENKINS-CI 00:38:58.231 [Pipeline] } 00:38:58.247 [Pipeline] // stage 00:38:58.254 [Pipeline] } 00:38:58.271 [Pipeline] // node 00:38:58.276 [Pipeline] End of Pipeline 00:38:58.306 Finished: SUCCESS